2026-03-26 01:36:51.665346 | Job console starting
2026-03-26 01:36:51.682911 | Updating git repos
2026-03-26 01:36:51.752921 | Cloning repos into workspace
2026-03-26 01:36:51.969120 | Restoring repo states
2026-03-26 01:36:51.992189 | Merging changes
2026-03-26 01:36:51.992210 | Checking out repos
2026-03-26 01:36:52.252103 | Preparing playbooks
2026-03-26 01:36:52.875974 | Running Ansible setup
2026-03-26 01:36:57.463694 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-26 01:36:58.230534 |
2026-03-26 01:36:58.230693 | PLAY [Base pre]
2026-03-26 01:36:58.250860 |
2026-03-26 01:36:58.251024 | TASK [Setup log path fact]
2026-03-26 01:36:58.281721 | orchestrator | ok
2026-03-26 01:36:58.299456 |
2026-03-26 01:36:58.299594 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-26 01:36:58.329662 | orchestrator | ok
2026-03-26 01:36:58.341639 |
2026-03-26 01:36:58.341743 | TASK [emit-job-header : Print job information]
2026-03-26 01:36:58.383374 | # Job Information
2026-03-26 01:36:58.383571 | Ansible Version: 2.16.14
2026-03-26 01:36:58.383613 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-03-26 01:36:58.383653 | Pipeline: periodic-midnight
2026-03-26 01:36:58.383680 | Executor: 521e9411259a
2026-03-26 01:36:58.383703 | Triggered by: https://github.com/osism/testbed
2026-03-26 01:36:58.383727 | Event ID: 92c960495d4748b39c48d3d797ac3182
2026-03-26 01:36:58.390638 |
2026-03-26 01:36:58.390747 | LOOP [emit-job-header : Print node information]
2026-03-26 01:36:58.525519 | orchestrator | ok:
2026-03-26 01:36:58.525815 | orchestrator | # Node Information
2026-03-26 01:36:58.525924 | orchestrator | Inventory Hostname: orchestrator
2026-03-26 01:36:58.525973 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-26 01:36:58.526009 | orchestrator | Username: zuul-testbed03
2026-03-26 01:36:58.526043 | orchestrator | Distro: Debian 12.13
2026-03-26 01:36:58.526081 | orchestrator | Provider: static-testbed
2026-03-26 01:36:58.526116 | orchestrator | Region:
2026-03-26 01:36:58.526149 | orchestrator | Label: testbed-orchestrator
2026-03-26 01:36:58.526182 | orchestrator | Product Name: OpenStack Nova
2026-03-26 01:36:58.526214 | orchestrator | Interface IP: 81.163.193.140
2026-03-26 01:36:58.554341 |
2026-03-26 01:36:58.554518 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-26 01:36:59.039327 | orchestrator -> localhost | changed
2026-03-26 01:36:59.057044 |
2026-03-26 01:36:59.057207 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-26 01:37:00.132486 | orchestrator -> localhost | changed
2026-03-26 01:37:00.157692 |
2026-03-26 01:37:00.157847 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-26 01:37:00.476533 | orchestrator -> localhost | ok
2026-03-26 01:37:00.490675 |
2026-03-26 01:37:00.490898 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-26 01:37:00.526234 | orchestrator | ok
2026-03-26 01:37:00.546317 | orchestrator | included: /var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-26 01:37:00.554497 |
2026-03-26 01:37:00.554600 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-26 01:37:02.344245 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-26 01:37:02.344508 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/work/6d507829b6994532b2cddf15505f7f09_id_rsa
2026-03-26 01:37:02.344559 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/work/6d507829b6994532b2cddf15505f7f09_id_rsa.pub
2026-03-26 01:37:02.344592 | orchestrator -> localhost | The key fingerprint is:
2026-03-26 01:37:02.344622 | orchestrator -> localhost | SHA256:xWyU5t0EaFjHT/UHZmymIt/WPO4ZhZRyIYacKf83A2s zuul-build-sshkey
2026-03-26 01:37:02.344650 | orchestrator -> localhost | The key's randomart image is:
2026-03-26 01:37:02.344689 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-26 01:37:02.344716 | orchestrator -> localhost | | +oB=o=..|
2026-03-26 01:37:02.344742 | orchestrator -> localhost | | o+Xo.+*+.|
2026-03-26 01:37:02.344767 | orchestrator -> localhost | | *=..O+ o|
2026-03-26 01:37:02.344791 | orchestrator -> localhost | | .o+ ++o..|
2026-03-26 01:37:02.344816 | orchestrator -> localhost | | So + =. .|
2026-03-26 01:37:02.344844 | orchestrator -> localhost | | . E B. |
2026-03-26 01:37:02.344870 | orchestrator -> localhost | | o o.+ |
2026-03-26 01:37:02.344920 | orchestrator -> localhost | | .o |
2026-03-26 01:37:02.344947 | orchestrator -> localhost | | .o |
2026-03-26 01:37:02.344986 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-26 01:37:02.345057 | orchestrator -> localhost | ok: Runtime: 0:00:01.282964
2026-03-26 01:37:02.354259 |
2026-03-26 01:37:02.354386 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-26 01:37:02.392509 | orchestrator | ok
2026-03-26 01:37:02.405723 | orchestrator | included: /var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-26 01:37:02.415314 |
2026-03-26 01:37:02.415415 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-26 01:37:02.439039 | orchestrator | skipping: Conditional result was False
2026-03-26 01:37:02.449174 |
2026-03-26 01:37:02.449300 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-26 01:37:03.097792 | orchestrator | changed
2026-03-26 01:37:03.108070 |
2026-03-26 01:37:03.108207 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-26 01:37:03.418972 | orchestrator | ok
2026-03-26 01:37:03.428462 |
2026-03-26 01:37:03.428618 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-26 01:37:03.915703 | orchestrator | ok
2026-03-26 01:37:03.921961 |
2026-03-26 01:37:03.922085 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-26 01:37:04.395806 | orchestrator | ok
2026-03-26 01:37:04.406019 |
2026-03-26 01:37:04.406157 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-26 01:37:04.430361 | orchestrator | skipping: Conditional result was False
2026-03-26 01:37:04.441755 |
2026-03-26 01:37:04.441951 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-26 01:37:04.903351 | orchestrator -> localhost | changed
2026-03-26 01:37:04.928262 |
2026-03-26 01:37:04.928408 | TASK [add-build-sshkey : Add back temp key]
2026-03-26 01:37:05.309285 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/work/6d507829b6994532b2cddf15505f7f09_id_rsa (zuul-build-sshkey)
2026-03-26 01:37:05.309736 | orchestrator -> localhost | ok: Runtime: 0:00:00.018296
2026-03-26 01:37:05.322703 |
2026-03-26 01:37:05.322865 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-26 01:37:05.775798 | orchestrator | ok
2026-03-26 01:37:05.783552 |
2026-03-26 01:37:05.783684 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-26 01:37:05.818472 | orchestrator | skipping: Conditional result was False
2026-03-26 01:37:05.876512 |
2026-03-26 01:37:05.876649 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-26 01:37:06.300993 | orchestrator | ok
2026-03-26 01:37:06.316735 |
2026-03-26 01:37:06.316868 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-26 01:37:06.363497 | orchestrator | ok
2026-03-26 01:37:06.373808 |
2026-03-26 01:37:06.373958 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-26 01:37:06.703680 | orchestrator -> localhost | ok
2026-03-26 01:37:06.713705 |
2026-03-26 01:37:06.713825 | TASK [validate-host : Collect information about the host]
2026-03-26 01:37:07.961193 | orchestrator | ok
2026-03-26 01:37:07.979025 |
2026-03-26 01:37:07.979139 | TASK [validate-host : Sanitize hostname]
2026-03-26 01:37:08.043434 | orchestrator | ok
2026-03-26 01:37:08.051468 |
2026-03-26 01:37:08.051612 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-26 01:37:08.635490 | orchestrator -> localhost | changed
2026-03-26 01:37:08.648289 |
2026-03-26 01:37:08.648451 | TASK [validate-host : Collect information about zuul worker]
2026-03-26 01:37:09.110738 | orchestrator | ok
2026-03-26 01:37:09.119375 |
2026-03-26 01:37:09.119507 | TASK [validate-host : Write out all zuul information for each host]
2026-03-26 01:37:09.677427 | orchestrator -> localhost | changed
2026-03-26 01:37:09.697902 |
2026-03-26 01:37:09.698134 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-26 01:37:10.031794 | orchestrator | ok
2026-03-26 01:37:10.042021 |
2026-03-26 01:37:10.042178 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-26 01:37:29.116833 | orchestrator | changed:
2026-03-26 01:37:29.117171 | orchestrator | .d..t...... src/
2026-03-26 01:37:29.117224 | orchestrator | .d..t...... src/github.com/
2026-03-26 01:37:29.117260 | orchestrator | .d..t...... src/github.com/osism/
2026-03-26 01:37:29.117292 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-26 01:37:29.117321 | orchestrator | RedHat.yml
2026-03-26 01:37:29.133794 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-26 01:37:29.133811 | orchestrator | RedHat.yml
2026-03-26 01:37:29.133868 | orchestrator | = 2.2.0"...
2026-03-26 01:37:39.634586 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-26 01:37:39.653700 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-26 01:37:39.806852 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-26 01:37:40.326828 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-26 01:37:40.392647 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-26 01:37:40.915866 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-26 01:37:41.286005 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-26 01:37:42.232778 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-26 01:37:42.232873 | orchestrator |
2026-03-26 01:37:42.232882 | orchestrator | Providers are signed by their developers.
2026-03-26 01:37:42.232887 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-26 01:37:42.232893 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-26 01:37:42.232900 | orchestrator |
2026-03-26 01:37:42.232905 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-26 01:37:42.232923 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-26 01:37:42.232928 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-26 01:37:42.232932 | orchestrator | you run "tofu init" in the future.
2026-03-26 01:37:42.233235 | orchestrator |
2026-03-26 01:37:42.233259 | orchestrator | OpenTofu has been successfully initialized!
2026-03-26 01:37:42.233266 | orchestrator |
2026-03-26 01:37:42.233270 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-26 01:37:42.233274 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-26 01:37:42.233279 | orchestrator | should now work.
2026-03-26 01:37:42.233286 | orchestrator |
2026-03-26 01:37:42.233297 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-26 01:37:42.233301 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-26 01:37:42.233309 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-26 01:37:42.413223 | orchestrator | Created and switched to workspace "ci"!
2026-03-26 01:37:42.413344 | orchestrator |
2026-03-26 01:37:42.413372 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-26 01:37:42.413391 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-26 01:37:42.413413 | orchestrator | for this configuration.
2026-03-26 01:37:42.569679 | orchestrator | ci.auto.tfvars
2026-03-26 01:37:42.573662 | orchestrator | default_custom.tf
2026-03-26 01:37:43.606834 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-26 01:37:44.243208 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-26 01:37:44.486815 | orchestrator |
2026-03-26 01:37:44.486887 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-26 01:37:44.486895 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-26 01:37:44.486900 | orchestrator | + create
2026-03-26 01:37:44.486905 | orchestrator | <= read (data resources)
2026-03-26 01:37:44.486911 | orchestrator |
2026-03-26 01:37:44.486915 | orchestrator | OpenTofu will perform the following actions:
2026-03-26 01:37:44.486927 | orchestrator |
2026-03-26 01:37:44.486932 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-26 01:37:44.486936 | orchestrator | # (config refers to values not yet known)
2026-03-26 01:37:44.486940 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-26 01:37:44.486944 | orchestrator | + checksum = (known after apply)
2026-03-26 01:37:44.486948 | orchestrator | + created_at = (known after apply)
2026-03-26 01:37:44.486953 | orchestrator | + file = (known after apply)
2026-03-26 01:37:44.486957 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.486987 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.486991 | orchestrator | + min_disk_gb = (known after apply)
2026-03-26 01:37:44.486995 | orchestrator | + min_ram_mb = (known after apply)
2026-03-26 01:37:44.486999 | orchestrator | + most_recent = true
2026-03-26 01:37:44.487003 | orchestrator | + name = (known after apply)
2026-03-26 01:37:44.487007 | orchestrator | + protected = (known after apply)
2026-03-26 01:37:44.487011 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487017 | orchestrator | + schema = (known after apply)
2026-03-26 01:37:44.487021 | orchestrator | + size_bytes = (known after apply)
2026-03-26 01:37:44.487025 | orchestrator | + tags = (known after apply)
2026-03-26 01:37:44.487029 | orchestrator | + updated_at = (known after apply)
2026-03-26 01:37:44.487033 | orchestrator | }
2026-03-26 01:37:44.487037 | orchestrator |
2026-03-26 01:37:44.487041 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-26 01:37:44.487045 | orchestrator | # (config refers to values not yet known)
2026-03-26 01:37:44.487049 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-26 01:37:44.487053 | orchestrator | + checksum = (known after apply)
2026-03-26 01:37:44.487056 | orchestrator | + created_at = (known after apply)
2026-03-26 01:37:44.487060 | orchestrator | + file = (known after apply)
2026-03-26 01:37:44.487064 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487068 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487071 | orchestrator | + min_disk_gb = (known after apply)
2026-03-26 01:37:44.487075 | orchestrator | + min_ram_mb = (known after apply)
2026-03-26 01:37:44.487079 | orchestrator | + most_recent = true
2026-03-26 01:37:44.487083 | orchestrator | + name = (known after apply)
2026-03-26 01:37:44.487087 | orchestrator | + protected = (known after apply)
2026-03-26 01:37:44.487090 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487094 | orchestrator | + schema = (known after apply)
2026-03-26 01:37:44.487098 | orchestrator | + size_bytes = (known after apply)
2026-03-26 01:37:44.487102 | orchestrator | + tags = (known after apply)
2026-03-26 01:37:44.487106 | orchestrator | + updated_at = (known after apply)
2026-03-26 01:37:44.487109 | orchestrator | }
2026-03-26 01:37:44.487115 | orchestrator |
2026-03-26 01:37:44.487119 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-26 01:37:44.487123 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-26 01:37:44.487127 | orchestrator | + content = (known after apply)
2026-03-26 01:37:44.487131 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-26 01:37:44.487135 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-26 01:37:44.487139 | orchestrator | + content_md5 = (known after apply)
2026-03-26 01:37:44.487143 | orchestrator | + content_sha1 = (known after apply)
2026-03-26 01:37:44.487147 | orchestrator | + content_sha256 = (known after apply)
2026-03-26 01:37:44.487150 | orchestrator | + content_sha512 = (known after apply)
2026-03-26 01:37:44.487154 | orchestrator | + directory_permission = "0777"
2026-03-26 01:37:44.487158 | orchestrator | + file_permission = "0644"
2026-03-26 01:37:44.487162 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-26 01:37:44.487166 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487169 | orchestrator | }
2026-03-26 01:37:44.487173 | orchestrator |
2026-03-26 01:37:44.487177 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-26 01:37:44.487181 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-26 01:37:44.487185 | orchestrator | + content = (known after apply)
2026-03-26 01:37:44.487188 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-26 01:37:44.487192 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-26 01:37:44.487196 | orchestrator | + content_md5 = (known after apply)
2026-03-26 01:37:44.487200 | orchestrator | + content_sha1 = (known after apply)
2026-03-26 01:37:44.487204 | orchestrator | + content_sha256 = (known after apply)
2026-03-26 01:37:44.487217 | orchestrator | + content_sha512 = (known after apply)
2026-03-26 01:37:44.487221 | orchestrator | + directory_permission = "0777"
2026-03-26 01:37:44.487225 | orchestrator | + file_permission = "0644"
2026-03-26 01:37:44.487232 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-26 01:37:44.487236 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487240 | orchestrator | }
2026-03-26 01:37:44.487245 | orchestrator |
2026-03-26 01:37:44.487249 | orchestrator | # local_file.inventory will be created
2026-03-26 01:37:44.487253 | orchestrator | + resource "local_file" "inventory" {
2026-03-26 01:37:44.487256 | orchestrator | + content = (known after apply)
2026-03-26 01:37:44.487260 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-26 01:37:44.487264 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-26 01:37:44.487268 | orchestrator | + content_md5 = (known after apply)
2026-03-26 01:37:44.487272 | orchestrator | + content_sha1 = (known after apply)
2026-03-26 01:37:44.487276 | orchestrator | + content_sha256 = (known after apply)
2026-03-26 01:37:44.487279 | orchestrator | + content_sha512 = (known after apply)
2026-03-26 01:37:44.487283 | orchestrator | + directory_permission = "0777"
2026-03-26 01:37:44.487287 | orchestrator | + file_permission = "0644"
2026-03-26 01:37:44.487291 | orchestrator | + filename = "inventory.ci"
2026-03-26 01:37:44.487295 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487298 | orchestrator | }
2026-03-26 01:37:44.487302 | orchestrator |
2026-03-26 01:37:44.487306 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-26 01:37:44.487310 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-26 01:37:44.487314 | orchestrator | + content = (sensitive value)
2026-03-26 01:37:44.487317 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-26 01:37:44.487321 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-26 01:37:44.487325 | orchestrator | + content_md5 = (known after apply)
2026-03-26 01:37:44.487329 | orchestrator | + content_sha1 = (known after apply)
2026-03-26 01:37:44.487332 | orchestrator | + content_sha256 = (known after apply)
2026-03-26 01:37:44.487336 | orchestrator | + content_sha512 = (known after apply)
2026-03-26 01:37:44.487340 | orchestrator | + directory_permission = "0700"
2026-03-26 01:37:44.487344 | orchestrator | + file_permission = "0600"
2026-03-26 01:37:44.487348 | orchestrator | + filename = ".id_rsa.ci"
2026-03-26 01:37:44.487352 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487355 | orchestrator | }
2026-03-26 01:37:44.487359 | orchestrator |
2026-03-26 01:37:44.487363 | orchestrator | # null_resource.node_semaphore will be created
2026-03-26 01:37:44.487367 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-26 01:37:44.487370 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487374 | orchestrator | }
2026-03-26 01:37:44.487380 | orchestrator |
2026-03-26 01:37:44.487384 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-26 01:37:44.487387 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-26 01:37:44.487391 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487395 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487399 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487403 | orchestrator | + image_id = (known after apply)
2026-03-26 01:37:44.487406 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487410 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-26 01:37:44.487414 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487418 | orchestrator | + size = 80
2026-03-26 01:37:44.487422 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487426 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487429 | orchestrator | }
2026-03-26 01:37:44.487433 | orchestrator |
2026-03-26 01:37:44.487437 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-26 01:37:44.487441 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-26 01:37:44.487445 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487448 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487452 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487492 | orchestrator | + image_id = (known after apply)
2026-03-26 01:37:44.487496 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487500 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-26 01:37:44.487504 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487508 | orchestrator | + size = 80
2026-03-26 01:37:44.487512 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487515 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487519 | orchestrator | }
2026-03-26 01:37:44.487523 | orchestrator |
2026-03-26 01:37:44.487527 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-26 01:37:44.487530 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-26 01:37:44.487535 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487541 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487547 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487553 | orchestrator | + image_id = (known after apply)
2026-03-26 01:37:44.487559 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487564 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-26 01:37:44.487572 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487581 | orchestrator | + size = 80
2026-03-26 01:37:44.487588 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487594 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487599 | orchestrator | }
2026-03-26 01:37:44.487609 | orchestrator |
2026-03-26 01:37:44.487614 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-26 01:37:44.487620 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-26 01:37:44.487626 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487631 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487637 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487642 | orchestrator | + image_id = (known after apply)
2026-03-26 01:37:44.487648 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487654 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-26 01:37:44.487660 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487666 | orchestrator | + size = 80
2026-03-26 01:37:44.487676 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487682 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487688 | orchestrator | }
2026-03-26 01:37:44.487693 | orchestrator |
2026-03-26 01:37:44.487699 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-26 01:37:44.487704 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-26 01:37:44.487710 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487716 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487722 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487728 | orchestrator | + image_id = (known after apply)
2026-03-26 01:37:44.487734 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487739 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-26 01:37:44.487743 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487747 | orchestrator | + size = 80
2026-03-26 01:37:44.487751 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487755 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487758 | orchestrator | }
2026-03-26 01:37:44.487762 | orchestrator |
2026-03-26 01:37:44.487766 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-26 01:37:44.487770 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-26 01:37:44.487774 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487777 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487781 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487789 | orchestrator | + image_id = (known after apply)
2026-03-26 01:37:44.487793 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487797 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-26 01:37:44.487801 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487804 | orchestrator | + size = 80
2026-03-26 01:37:44.487808 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487812 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487816 | orchestrator | }
2026-03-26 01:37:44.487822 | orchestrator |
2026-03-26 01:37:44.487826 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-26 01:37:44.487830 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-26 01:37:44.487834 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487838 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487842 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487846 | orchestrator | + image_id = (known after apply)
2026-03-26 01:37:44.487849 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487853 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-26 01:37:44.487857 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487861 | orchestrator | + size = 80
2026-03-26 01:37:44.487865 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487869 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487873 | orchestrator | }
2026-03-26 01:37:44.487877 | orchestrator |
2026-03-26 01:37:44.487881 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-26 01:37:44.487885 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-26 01:37:44.487889 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487893 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487897 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487901 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487905 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-26 01:37:44.487909 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487913 | orchestrator | + size = 20
2026-03-26 01:37:44.487917 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487921 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487925 | orchestrator | }
2026-03-26 01:37:44.487929 | orchestrator |
2026-03-26 01:37:44.487933 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-26 01:37:44.487937 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-26 01:37:44.487941 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487945 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.487949 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.487953 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.487957 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-26 01:37:44.487960 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.487964 | orchestrator | + size = 20
2026-03-26 01:37:44.487968 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.487972 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.487976 | orchestrator | }
2026-03-26 01:37:44.487980 | orchestrator |
2026-03-26 01:37:44.487984 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-26 01:37:44.487988 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-26 01:37:44.487992 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.487996 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.488000 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.488004 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.488008 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-26 01:37:44.488012 | orchestrator | + region = (known after apply)
2026-03-26 01:37:44.488020 | orchestrator | + size = 20
2026-03-26 01:37:44.488024 | orchestrator | + volume_retype_policy = "never"
2026-03-26 01:37:44.488029 | orchestrator | + volume_type = "ssd"
2026-03-26 01:37:44.488032 | orchestrator | }
2026-03-26 01:37:44.488038 | orchestrator |
2026-03-26 01:37:44.488042 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-26 01:37:44.488046 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-26 01:37:44.488050 | orchestrator | + attachment = (known after apply)
2026-03-26 01:37:44.488054 | orchestrator | + availability_zone = "nova"
2026-03-26 01:37:44.488058 | orchestrator | + id = (known after apply)
2026-03-26 01:37:44.488064 | orchestrator | + metadata = (known after apply)
2026-03-26 01:37:44.488068 | orchestrator | + name = "testbed-volume-3-node-3" 2026-03-26 01:37:44.488072 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.488076 | orchestrator | + size = 20 2026-03-26 01:37:44.488080 | orchestrator | + volume_retype_policy = "never" 2026-03-26 01:37:44.488084 | orchestrator | + volume_type = "ssd" 2026-03-26 01:37:44.488088 | orchestrator | } 2026-03-26 01:37:44.488092 | orchestrator | 2026-03-26 01:37:44.488096 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created 2026-03-26 01:37:44.488100 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-26 01:37:44.488104 | orchestrator | + attachment = (known after apply) 2026-03-26 01:37:44.488108 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488112 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488116 | orchestrator | + metadata = (known after apply) 2026-03-26 01:37:44.488120 | orchestrator | + name = "testbed-volume-4-node-4" 2026-03-26 01:37:44.488124 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.488128 | orchestrator | + size = 20 2026-03-26 01:37:44.488132 | orchestrator | + volume_retype_policy = "never" 2026-03-26 01:37:44.488136 | orchestrator | + volume_type = "ssd" 2026-03-26 01:37:44.488140 | orchestrator | } 2026-03-26 01:37:44.488144 | orchestrator | 2026-03-26 01:37:44.488148 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created 2026-03-26 01:37:44.488151 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-26 01:37:44.488155 | orchestrator | + attachment = (known after apply) 2026-03-26 01:37:44.488159 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488163 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488167 | orchestrator | + metadata = (known after apply) 2026-03-26 01:37:44.488171 | orchestrator | + name = "testbed-volume-5-node-5" 
2026-03-26 01:37:44.488175 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.488179 | orchestrator | + size = 20 2026-03-26 01:37:44.488183 | orchestrator | + volume_retype_policy = "never" 2026-03-26 01:37:44.488187 | orchestrator | + volume_type = "ssd" 2026-03-26 01:37:44.488191 | orchestrator | } 2026-03-26 01:37:44.488195 | orchestrator | 2026-03-26 01:37:44.488199 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created 2026-03-26 01:37:44.488203 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-26 01:37:44.488207 | orchestrator | + attachment = (known after apply) 2026-03-26 01:37:44.488211 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488215 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488219 | orchestrator | + metadata = (known after apply) 2026-03-26 01:37:44.488223 | orchestrator | + name = "testbed-volume-6-node-3" 2026-03-26 01:37:44.488227 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.488231 | orchestrator | + size = 20 2026-03-26 01:37:44.488235 | orchestrator | + volume_retype_policy = "never" 2026-03-26 01:37:44.488239 | orchestrator | + volume_type = "ssd" 2026-03-26 01:37:44.488243 | orchestrator | } 2026-03-26 01:37:44.488247 | orchestrator | 2026-03-26 01:37:44.488251 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created 2026-03-26 01:37:44.488255 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-26 01:37:44.488262 | orchestrator | + attachment = (known after apply) 2026-03-26 01:37:44.488266 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488270 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488274 | orchestrator | + metadata = (known after apply) 2026-03-26 01:37:44.488278 | orchestrator | + name = "testbed-volume-7-node-4" 2026-03-26 01:37:44.488282 | orchestrator | + region = (known after apply) 
2026-03-26 01:37:44.488286 | orchestrator | + size = 20 2026-03-26 01:37:44.488290 | orchestrator | + volume_retype_policy = "never" 2026-03-26 01:37:44.488294 | orchestrator | + volume_type = "ssd" 2026-03-26 01:37:44.488298 | orchestrator | } 2026-03-26 01:37:44.488303 | orchestrator | 2026-03-26 01:37:44.488307 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-26 01:37:44.488311 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-26 01:37:44.488315 | orchestrator | + attachment = (known after apply) 2026-03-26 01:37:44.488319 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488323 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488327 | orchestrator | + metadata = (known after apply) 2026-03-26 01:37:44.488331 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-26 01:37:44.488335 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.488339 | orchestrator | + size = 20 2026-03-26 01:37:44.488343 | orchestrator | + volume_retype_policy = "never" 2026-03-26 01:37:44.488347 | orchestrator | + volume_type = "ssd" 2026-03-26 01:37:44.488351 | orchestrator | } 2026-03-26 01:37:44.488355 | orchestrator | 2026-03-26 01:37:44.488359 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-26 01:37:44.488363 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-26 01:37:44.488367 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-26 01:37:44.488371 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-26 01:37:44.488375 | orchestrator | + all_metadata = (known after apply) 2026-03-26 01:37:44.488378 | orchestrator | + all_tags = (known after apply) 2026-03-26 01:37:44.488382 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488386 | orchestrator | + config_drive = true 2026-03-26 01:37:44.488393 | orchestrator | + created = (known after apply) 
2026-03-26 01:37:44.488397 | orchestrator | + flavor_id = (known after apply) 2026-03-26 01:37:44.488401 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-26 01:37:44.488405 | orchestrator | + force_delete = false 2026-03-26 01:37:44.488409 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-26 01:37:44.488413 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488417 | orchestrator | + image_id = (known after apply) 2026-03-26 01:37:44.488421 | orchestrator | + image_name = (known after apply) 2026-03-26 01:37:44.488425 | orchestrator | + key_pair = "testbed" 2026-03-26 01:37:44.488429 | orchestrator | + name = "testbed-manager" 2026-03-26 01:37:44.488433 | orchestrator | + power_state = "active" 2026-03-26 01:37:44.488437 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.488440 | orchestrator | + security_groups = (known after apply) 2026-03-26 01:37:44.488444 | orchestrator | + stop_before_destroy = false 2026-03-26 01:37:44.488448 | orchestrator | + updated = (known after apply) 2026-03-26 01:37:44.488452 | orchestrator | + user_data = (sensitive value) 2026-03-26 01:37:44.488477 | orchestrator | 2026-03-26 01:37:44.488483 | orchestrator | + block_device { 2026-03-26 01:37:44.488488 | orchestrator | + boot_index = 0 2026-03-26 01:37:44.488492 | orchestrator | + delete_on_termination = false 2026-03-26 01:37:44.488496 | orchestrator | + destination_type = "volume" 2026-03-26 01:37:44.488500 | orchestrator | + multiattach = false 2026-03-26 01:37:44.488504 | orchestrator | + source_type = "volume" 2026-03-26 01:37:44.488508 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.488515 | orchestrator | } 2026-03-26 01:37:44.488519 | orchestrator | 2026-03-26 01:37:44.488523 | orchestrator | + network { 2026-03-26 01:37:44.488527 | orchestrator | + access_network = false 2026-03-26 01:37:44.488532 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-26 01:37:44.488536 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-03-26 01:37:44.488540 | orchestrator | + mac = (known after apply) 2026-03-26 01:37:44.488544 | orchestrator | + name = (known after apply) 2026-03-26 01:37:44.488548 | orchestrator | + port = (known after apply) 2026-03-26 01:37:44.488553 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.488559 | orchestrator | } 2026-03-26 01:37:44.488568 | orchestrator | } 2026-03-26 01:37:44.488580 | orchestrator | 2026-03-26 01:37:44.488586 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-26 01:37:44.488592 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-26 01:37:44.488598 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-26 01:37:44.488604 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-26 01:37:44.488610 | orchestrator | + all_metadata = (known after apply) 2026-03-26 01:37:44.488616 | orchestrator | + all_tags = (known after apply) 2026-03-26 01:37:44.488622 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488628 | orchestrator | + config_drive = true 2026-03-26 01:37:44.488634 | orchestrator | + created = (known after apply) 2026-03-26 01:37:44.488640 | orchestrator | + flavor_id = (known after apply) 2026-03-26 01:37:44.488647 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-26 01:37:44.488653 | orchestrator | + force_delete = false 2026-03-26 01:37:44.488659 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-26 01:37:44.488666 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488673 | orchestrator | + image_id = (known after apply) 2026-03-26 01:37:44.488678 | orchestrator | + image_name = (known after apply) 2026-03-26 01:37:44.488682 | orchestrator | + key_pair = "testbed" 2026-03-26 01:37:44.488686 | orchestrator | + name = "testbed-node-0" 2026-03-26 01:37:44.488690 | orchestrator | + power_state = "active" 2026-03-26 01:37:44.488694 | orchestrator | + region 
= (known after apply) 2026-03-26 01:37:44.488698 | orchestrator | + security_groups = (known after apply) 2026-03-26 01:37:44.488702 | orchestrator | + stop_before_destroy = false 2026-03-26 01:37:44.488705 | orchestrator | + updated = (known after apply) 2026-03-26 01:37:44.488709 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-26 01:37:44.488713 | orchestrator | 2026-03-26 01:37:44.488717 | orchestrator | + block_device { 2026-03-26 01:37:44.488721 | orchestrator | + boot_index = 0 2026-03-26 01:37:44.488725 | orchestrator | + delete_on_termination = false 2026-03-26 01:37:44.488729 | orchestrator | + destination_type = "volume" 2026-03-26 01:37:44.488733 | orchestrator | + multiattach = false 2026-03-26 01:37:44.488737 | orchestrator | + source_type = "volume" 2026-03-26 01:37:44.488741 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.488745 | orchestrator | } 2026-03-26 01:37:44.488749 | orchestrator | 2026-03-26 01:37:44.488753 | orchestrator | + network { 2026-03-26 01:37:44.488757 | orchestrator | + access_network = false 2026-03-26 01:37:44.488761 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-26 01:37:44.488765 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-26 01:37:44.488769 | orchestrator | + mac = (known after apply) 2026-03-26 01:37:44.488773 | orchestrator | + name = (known after apply) 2026-03-26 01:37:44.488777 | orchestrator | + port = (known after apply) 2026-03-26 01:37:44.488780 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.488784 | orchestrator | } 2026-03-26 01:37:44.488788 | orchestrator | } 2026-03-26 01:37:44.488794 | orchestrator | 2026-03-26 01:37:44.488798 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-26 01:37:44.488802 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-26 01:37:44.488806 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-26 
01:37:44.488815 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-26 01:37:44.488819 | orchestrator | + all_metadata = (known after apply) 2026-03-26 01:37:44.488823 | orchestrator | + all_tags = (known after apply) 2026-03-26 01:37:44.488827 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.488831 | orchestrator | + config_drive = true 2026-03-26 01:37:44.488835 | orchestrator | + created = (known after apply) 2026-03-26 01:37:44.488838 | orchestrator | + flavor_id = (known after apply) 2026-03-26 01:37:44.488842 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-26 01:37:44.488846 | orchestrator | + force_delete = false 2026-03-26 01:37:44.488850 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-26 01:37:44.488854 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.488858 | orchestrator | + image_id = (known after apply) 2026-03-26 01:37:44.488862 | orchestrator | + image_name = (known after apply) 2026-03-26 01:37:44.488866 | orchestrator | + key_pair = "testbed" 2026-03-26 01:37:44.488870 | orchestrator | + name = "testbed-node-1" 2026-03-26 01:37:44.488874 | orchestrator | + power_state = "active" 2026-03-26 01:37:44.488878 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.488882 | orchestrator | + security_groups = (known after apply) 2026-03-26 01:37:44.488885 | orchestrator | + stop_before_destroy = false 2026-03-26 01:37:44.488889 | orchestrator | + updated = (known after apply) 2026-03-26 01:37:44.488897 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-26 01:37:44.488901 | orchestrator | 2026-03-26 01:37:44.488905 | orchestrator | + block_device { 2026-03-26 01:37:44.488909 | orchestrator | + boot_index = 0 2026-03-26 01:37:44.488913 | orchestrator | + delete_on_termination = false 2026-03-26 01:37:44.488917 | orchestrator | + destination_type = "volume" 2026-03-26 01:37:44.488921 | orchestrator | + multiattach = false 2026-03-26 
01:37:44.488925 | orchestrator | + source_type = "volume" 2026-03-26 01:37:44.488928 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.488932 | orchestrator | } 2026-03-26 01:37:44.488936 | orchestrator | 2026-03-26 01:37:44.488940 | orchestrator | + network { 2026-03-26 01:37:44.488944 | orchestrator | + access_network = false 2026-03-26 01:37:44.488948 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-26 01:37:44.488952 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-26 01:37:44.488956 | orchestrator | + mac = (known after apply) 2026-03-26 01:37:44.488960 | orchestrator | + name = (known after apply) 2026-03-26 01:37:44.488964 | orchestrator | + port = (known after apply) 2026-03-26 01:37:44.488968 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.488972 | orchestrator | } 2026-03-26 01:37:44.488976 | orchestrator | } 2026-03-26 01:37:44.488982 | orchestrator | 2026-03-26 01:37:44.488986 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-26 01:37:44.488990 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-26 01:37:44.488994 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-26 01:37:44.488998 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-26 01:37:44.489002 | orchestrator | + all_metadata = (known after apply) 2026-03-26 01:37:44.489006 | orchestrator | + all_tags = (known after apply) 2026-03-26 01:37:44.489010 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.489014 | orchestrator | + config_drive = true 2026-03-26 01:37:44.489018 | orchestrator | + created = (known after apply) 2026-03-26 01:37:44.489022 | orchestrator | + flavor_id = (known after apply) 2026-03-26 01:37:44.489026 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-26 01:37:44.489030 | orchestrator | + force_delete = false 2026-03-26 01:37:44.489034 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-26 
01:37:44.489038 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489041 | orchestrator | + image_id = (known after apply) 2026-03-26 01:37:44.489048 | orchestrator | + image_name = (known after apply) 2026-03-26 01:37:44.489052 | orchestrator | + key_pair = "testbed" 2026-03-26 01:37:44.489056 | orchestrator | + name = "testbed-node-2" 2026-03-26 01:37:44.489060 | orchestrator | + power_state = "active" 2026-03-26 01:37:44.489064 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489068 | orchestrator | + security_groups = (known after apply) 2026-03-26 01:37:44.489071 | orchestrator | + stop_before_destroy = false 2026-03-26 01:37:44.489075 | orchestrator | + updated = (known after apply) 2026-03-26 01:37:44.489079 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-26 01:37:44.489083 | orchestrator | 2026-03-26 01:37:44.489087 | orchestrator | + block_device { 2026-03-26 01:37:44.489091 | orchestrator | + boot_index = 0 2026-03-26 01:37:44.489095 | orchestrator | + delete_on_termination = false 2026-03-26 01:37:44.489099 | orchestrator | + destination_type = "volume" 2026-03-26 01:37:44.489103 | orchestrator | + multiattach = false 2026-03-26 01:37:44.489107 | orchestrator | + source_type = "volume" 2026-03-26 01:37:44.489111 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.489115 | orchestrator | } 2026-03-26 01:37:44.489119 | orchestrator | 2026-03-26 01:37:44.489123 | orchestrator | + network { 2026-03-26 01:37:44.489127 | orchestrator | + access_network = false 2026-03-26 01:37:44.489131 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-26 01:37:44.489135 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-26 01:37:44.489138 | orchestrator | + mac = (known after apply) 2026-03-26 01:37:44.489142 | orchestrator | + name = (known after apply) 2026-03-26 01:37:44.489146 | orchestrator | + port = (known after apply) 2026-03-26 01:37:44.489150 | orchestrator | + uuid 
= (known after apply) 2026-03-26 01:37:44.489154 | orchestrator | } 2026-03-26 01:37:44.489158 | orchestrator | } 2026-03-26 01:37:44.489162 | orchestrator | 2026-03-26 01:37:44.489171 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-26 01:37:44.489175 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-26 01:37:44.489179 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-26 01:37:44.489183 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-26 01:37:44.489187 | orchestrator | + all_metadata = (known after apply) 2026-03-26 01:37:44.489191 | orchestrator | + all_tags = (known after apply) 2026-03-26 01:37:44.489195 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.489199 | orchestrator | + config_drive = true 2026-03-26 01:37:44.489202 | orchestrator | + created = (known after apply) 2026-03-26 01:37:44.489206 | orchestrator | + flavor_id = (known after apply) 2026-03-26 01:37:44.489210 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-26 01:37:44.489214 | orchestrator | + force_delete = false 2026-03-26 01:37:44.489218 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-26 01:37:44.489222 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489226 | orchestrator | + image_id = (known after apply) 2026-03-26 01:37:44.489230 | orchestrator | + image_name = (known after apply) 2026-03-26 01:37:44.489234 | orchestrator | + key_pair = "testbed" 2026-03-26 01:37:44.489238 | orchestrator | + name = "testbed-node-3" 2026-03-26 01:37:44.489242 | orchestrator | + power_state = "active" 2026-03-26 01:37:44.489245 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489249 | orchestrator | + security_groups = (known after apply) 2026-03-26 01:37:44.489253 | orchestrator | + stop_before_destroy = false 2026-03-26 01:37:44.489257 | orchestrator | + updated = (known after apply) 2026-03-26 01:37:44.489261 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-26 01:37:44.489265 | orchestrator | 2026-03-26 01:37:44.489269 | orchestrator | + block_device { 2026-03-26 01:37:44.489273 | orchestrator | + boot_index = 0 2026-03-26 01:37:44.489277 | orchestrator | + delete_on_termination = false 2026-03-26 01:37:44.489281 | orchestrator | + destination_type = "volume" 2026-03-26 01:37:44.489288 | orchestrator | + multiattach = false 2026-03-26 01:37:44.489292 | orchestrator | + source_type = "volume" 2026-03-26 01:37:44.489296 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.489300 | orchestrator | } 2026-03-26 01:37:44.489304 | orchestrator | 2026-03-26 01:37:44.489308 | orchestrator | + network { 2026-03-26 01:37:44.489311 | orchestrator | + access_network = false 2026-03-26 01:37:44.489315 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-26 01:37:44.489319 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-26 01:37:44.489323 | orchestrator | + mac = (known after apply) 2026-03-26 01:37:44.489327 | orchestrator | + name = (known after apply) 2026-03-26 01:37:44.489331 | orchestrator | + port = (known after apply) 2026-03-26 01:37:44.489335 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.489339 | orchestrator | } 2026-03-26 01:37:44.489343 | orchestrator | } 2026-03-26 01:37:44.489349 | orchestrator | 2026-03-26 01:37:44.489353 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-26 01:37:44.489357 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-26 01:37:44.489361 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-26 01:37:44.489365 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-26 01:37:44.489369 | orchestrator | + all_metadata = (known after apply) 2026-03-26 01:37:44.489372 | orchestrator | + all_tags = (known after apply) 2026-03-26 01:37:44.489376 | orchestrator | + availability_zone = "nova" 2026-03-26 
01:37:44.489380 | orchestrator | + config_drive = true 2026-03-26 01:37:44.489384 | orchestrator | + created = (known after apply) 2026-03-26 01:37:44.489388 | orchestrator | + flavor_id = (known after apply) 2026-03-26 01:37:44.489392 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-26 01:37:44.489396 | orchestrator | + force_delete = false 2026-03-26 01:37:44.489400 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-26 01:37:44.489404 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489408 | orchestrator | + image_id = (known after apply) 2026-03-26 01:37:44.489411 | orchestrator | + image_name = (known after apply) 2026-03-26 01:37:44.489415 | orchestrator | + key_pair = "testbed" 2026-03-26 01:37:44.489419 | orchestrator | + name = "testbed-node-4" 2026-03-26 01:37:44.489423 | orchestrator | + power_state = "active" 2026-03-26 01:37:44.489427 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489431 | orchestrator | + security_groups = (known after apply) 2026-03-26 01:37:44.489435 | orchestrator | + stop_before_destroy = false 2026-03-26 01:37:44.489439 | orchestrator | + updated = (known after apply) 2026-03-26 01:37:44.489443 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-26 01:37:44.489447 | orchestrator | 2026-03-26 01:37:44.489451 | orchestrator | + block_device { 2026-03-26 01:37:44.489470 | orchestrator | + boot_index = 0 2026-03-26 01:37:44.489474 | orchestrator | + delete_on_termination = false 2026-03-26 01:37:44.489478 | orchestrator | + destination_type = "volume" 2026-03-26 01:37:44.489485 | orchestrator | + multiattach = false 2026-03-26 01:37:44.489491 | orchestrator | + source_type = "volume" 2026-03-26 01:37:44.489498 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.489505 | orchestrator | } 2026-03-26 01:37:44.489511 | orchestrator | 2026-03-26 01:37:44.489518 | orchestrator | + network { 2026-03-26 01:37:44.489524 | orchestrator | + 
access_network = false 2026-03-26 01:37:44.489531 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-26 01:37:44.489535 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-26 01:37:44.489539 | orchestrator | + mac = (known after apply) 2026-03-26 01:37:44.489543 | orchestrator | + name = (known after apply) 2026-03-26 01:37:44.489547 | orchestrator | + port = (known after apply) 2026-03-26 01:37:44.489551 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.489555 | orchestrator | } 2026-03-26 01:37:44.489559 | orchestrator | } 2026-03-26 01:37:44.489567 | orchestrator | 2026-03-26 01:37:44.489571 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-26 01:37:44.489575 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-26 01:37:44.489579 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-26 01:37:44.489583 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-26 01:37:44.489587 | orchestrator | + all_metadata = (known after apply) 2026-03-26 01:37:44.489591 | orchestrator | + all_tags = (known after apply) 2026-03-26 01:37:44.489595 | orchestrator | + availability_zone = "nova" 2026-03-26 01:37:44.489599 | orchestrator | + config_drive = true 2026-03-26 01:37:44.489603 | orchestrator | + created = (known after apply) 2026-03-26 01:37:44.489607 | orchestrator | + flavor_id = (known after apply) 2026-03-26 01:37:44.489611 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-26 01:37:44.489615 | orchestrator | + force_delete = false 2026-03-26 01:37:44.489618 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-26 01:37:44.489622 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489626 | orchestrator | + image_id = (known after apply) 2026-03-26 01:37:44.489630 | orchestrator | + image_name = (known after apply) 2026-03-26 01:37:44.489634 | orchestrator | + key_pair = "testbed" 2026-03-26 01:37:44.489638 | orchestrator | 
+ name = "testbed-node-5" 2026-03-26 01:37:44.489642 | orchestrator | + power_state = "active" 2026-03-26 01:37:44.489646 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489650 | orchestrator | + security_groups = (known after apply) 2026-03-26 01:37:44.489655 | orchestrator | + stop_before_destroy = false 2026-03-26 01:37:44.489661 | orchestrator | + updated = (known after apply) 2026-03-26 01:37:44.489666 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-26 01:37:44.489672 | orchestrator | 2026-03-26 01:37:44.489678 | orchestrator | + block_device { 2026-03-26 01:37:44.489684 | orchestrator | + boot_index = 0 2026-03-26 01:37:44.489691 | orchestrator | + delete_on_termination = false 2026-03-26 01:37:44.489697 | orchestrator | + destination_type = "volume" 2026-03-26 01:37:44.489703 | orchestrator | + multiattach = false 2026-03-26 01:37:44.489710 | orchestrator | + source_type = "volume" 2026-03-26 01:37:44.489715 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.489719 | orchestrator | } 2026-03-26 01:37:44.489722 | orchestrator | 2026-03-26 01:37:44.489727 | orchestrator | + network { 2026-03-26 01:37:44.489731 | orchestrator | + access_network = false 2026-03-26 01:37:44.489734 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-26 01:37:44.489738 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-26 01:37:44.489742 | orchestrator | + mac = (known after apply) 2026-03-26 01:37:44.489746 | orchestrator | + name = (known after apply) 2026-03-26 01:37:44.489750 | orchestrator | + port = (known after apply) 2026-03-26 01:37:44.489754 | orchestrator | + uuid = (known after apply) 2026-03-26 01:37:44.489758 | orchestrator | } 2026-03-26 01:37:44.489762 | orchestrator | } 2026-03-26 01:37:44.489766 | orchestrator | 2026-03-26 01:37:44.489770 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-26 01:37:44.489774 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-03-26 01:37:44.489778 | orchestrator | + fingerprint = (known after apply) 2026-03-26 01:37:44.489782 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489786 | orchestrator | + name = "testbed" 2026-03-26 01:37:44.489792 | orchestrator | + private_key = (sensitive value) 2026-03-26 01:37:44.489798 | orchestrator | + public_key = (known after apply) 2026-03-26 01:37:44.489805 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489811 | orchestrator | + user_id = (known after apply) 2026-03-26 01:37:44.489818 | orchestrator | } 2026-03-26 01:37:44.489827 | orchestrator | 2026-03-26 01:37:44.489835 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-26 01:37:44.489839 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-26 01:37:44.489847 | orchestrator | + device = (known after apply) 2026-03-26 01:37:44.489851 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489855 | orchestrator | + instance_id = (known after apply) 2026-03-26 01:37:44.489859 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489866 | orchestrator | + volume_id = (known after apply) 2026-03-26 01:37:44.489870 | orchestrator | } 2026-03-26 01:37:44.489874 | orchestrator | 2026-03-26 01:37:44.489878 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-26 01:37:44.489882 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-26 01:37:44.489886 | orchestrator | + device = (known after apply) 2026-03-26 01:37:44.489890 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489894 | orchestrator | + instance_id = (known after apply) 2026-03-26 01:37:44.489897 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489901 | orchestrator | + volume_id = (known after apply) 2026-03-26 
01:37:44.489905 | orchestrator | } 2026-03-26 01:37:44.489909 | orchestrator | 2026-03-26 01:37:44.489913 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-26 01:37:44.489917 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-26 01:37:44.489921 | orchestrator | + device = (known after apply) 2026-03-26 01:37:44.489925 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489928 | orchestrator | + instance_id = (known after apply) 2026-03-26 01:37:44.489932 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489936 | orchestrator | + volume_id = (known after apply) 2026-03-26 01:37:44.489940 | orchestrator | } 2026-03-26 01:37:44.489944 | orchestrator | 2026-03-26 01:37:44.489948 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-03-26 01:37:44.489952 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-26 01:37:44.489956 | orchestrator | + device = (known after apply) 2026-03-26 01:37:44.489960 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489964 | orchestrator | + instance_id = (known after apply) 2026-03-26 01:37:44.489968 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.489971 | orchestrator | + volume_id = (known after apply) 2026-03-26 01:37:44.489975 | orchestrator | } 2026-03-26 01:37:44.489979 | orchestrator | 2026-03-26 01:37:44.489983 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-03-26 01:37:44.489987 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-26 01:37:44.489991 | orchestrator | + device = (known after apply) 2026-03-26 01:37:44.489995 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.489999 | orchestrator | + instance_id = (known after apply) 2026-03-26 01:37:44.490003 | 
2026-03-26 01:37:44.490007 | orchestrator |
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-03-26 01:37:44.492651 | orchestrator | + ip_version = 4 2026-03-26 01:37:44.492655 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-26 01:37:44.492659 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-26 01:37:44.492663 | orchestrator | + name = "subnet-testbed-management" 2026-03-26 01:37:44.492667 | orchestrator | + network_id = (known after apply) 2026-03-26 01:37:44.492671 | orchestrator | + no_gateway = false 2026-03-26 01:37:44.492675 | orchestrator | + region = (known after apply) 2026-03-26 01:37:44.492679 | orchestrator | + service_types = (known after apply) 2026-03-26 01:37:44.492689 | orchestrator | + tenant_id = (known after apply) 2026-03-26 01:37:44.492693 | orchestrator | 2026-03-26 01:37:44.492697 | orchestrator | + allocation_pool { 2026-03-26 01:37:44.492701 | orchestrator | + end = "192.168.31.250" 2026-03-26 01:37:44.492705 | orchestrator | + start = "192.168.31.200" 2026-03-26 01:37:44.492709 | orchestrator | } 2026-03-26 01:37:44.492713 | orchestrator | } 2026-03-26 01:37:44.492716 | orchestrator | 2026-03-26 01:37:44.492720 | orchestrator | # terraform_data.image will be created 2026-03-26 01:37:44.492724 | orchestrator | + resource "terraform_data" "image" { 2026-03-26 01:37:44.492732 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.492736 | orchestrator | + input = "Ubuntu 24.04" 2026-03-26 01:37:44.492740 | orchestrator | + output = (known after apply) 2026-03-26 01:37:44.492744 | orchestrator | } 2026-03-26 01:37:44.492748 | orchestrator | 2026-03-26 01:37:44.492752 | orchestrator | # terraform_data.image_node will be created 2026-03-26 01:37:44.492756 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-26 01:37:44.492760 | orchestrator | + id = (known after apply) 2026-03-26 01:37:44.492764 | orchestrator | + input = "Ubuntu 24.04" 2026-03-26 01:37:44.492768 | orchestrator | + output = (known after apply) 2026-03-26 01:37:44.492772 | orchestrator | } 2026-03-26 
01:37:44.492776 | orchestrator | 2026-03-26 01:37:44.492780 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-03-26 01:37:44.492784 | orchestrator | 2026-03-26 01:37:44.492788 | orchestrator | Changes to Outputs: 2026-03-26 01:37:44.492792 | orchestrator | + manager_address = (sensitive value) 2026-03-26 01:37:44.492796 | orchestrator | + private_key = (sensitive value) 2026-03-26 01:37:44.714782 | orchestrator | terraform_data.image: Creating... 2026-03-26 01:37:44.716553 | orchestrator | terraform_data.image_node: Creating... 2026-03-26 01:37:44.716938 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=08fb72f6-2178-02dd-a5f1-d4706cb277dc] 2026-03-26 01:37:44.718229 | orchestrator | terraform_data.image: Creation complete after 0s [id=8d4cdb9a-566a-9f81-21f9-f3b0207d51e9] 2026-03-26 01:37:44.736628 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-26 01:37:44.738886 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-26 01:37:44.750622 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-26 01:37:44.750722 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-26 01:37:44.751707 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-26 01:37:44.751794 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-26 01:37:44.752634 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-03-26 01:37:44.752682 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-26 01:37:44.753108 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-26 01:37:44.762656 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 
2026-03-26 01:37:45.210858 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-26 01:37:45.215290 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-26 01:37:45.263960 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2026-03-26 01:37:45.271667 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-26 01:37:45.761076 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=7977b382-565f-4526-9bfa-099755a02bf8] 2026-03-26 01:37:45.765649 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-26 01:37:45.816439 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-26 01:37:45.825652 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-26 01:37:48.358888 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=d11e4e4a-db1d-44df-8da9-5de7e993dd80] 2026-03-26 01:37:48.378630 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=47760649-09e9-4ed8-8303-e5ee473a8102] 2026-03-26 01:37:48.382439 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=7e352b46-e023-45cf-8a88-51cc46240a44] 2026-03-26 01:37:48.382985 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-26 01:37:48.388699 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-26 01:37:48.388921 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-03-26 01:37:48.390609 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=ff9bca06fb0c8ea2aa09bf77e2c39317213b84ec] 2026-03-26 01:37:48.405089 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=2dae49df-17cb-48b5-9940-ec5e7ec792d8] 2026-03-26 01:37:48.405424 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-26 01:37:48.414148 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=863ba5d2-7e2f-4393-95a6-83543745d331] 2026-03-26 01:37:48.420333 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-26 01:37:48.420404 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-26 01:37:48.430639 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=943c088c-5b56-4173-ab64-ec81e1cc816d] 2026-03-26 01:37:48.430851 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=a52ec37c-b4ea-4f83-9b16-3c0f6ce85263] 2026-03-26 01:37:48.437395 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 2026-03-26 01:37:48.442212 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-26 01:37:48.447125 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=66f2e27fe84d66c13d33c369431a63d3178daf01] 2026-03-26 01:37:48.451923 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 
2026-03-26 01:37:48.473173 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=7db5f133-fe7b-42a4-ad57-b076dc1856ab] 2026-03-26 01:37:48.494400 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=8ddd7966-84e6-4951-8a08-7b4fb4af2bd2] 2026-03-26 01:37:49.164791 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=c374eb4c-3572-4f0b-927c-38d35765f44a] 2026-03-26 01:37:50.035940 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=e73ccf28-9912-4c70-ab14-4cc9f269ba17] 2026-03-26 01:37:50.042723 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-26 01:37:51.763042 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=7634648a-b5a4-45bc-ac0b-8484a2642b22] 2026-03-26 01:37:51.787993 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=48d73a84-835d-480a-92c3-3edf7ed142ea] 2026-03-26 01:37:51.798954 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=ce600cf2-62c4-44aa-8248-5535335c6519] 2026-03-26 01:37:51.816886 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=4fa924fa-33d9-43ce-b208-159d6f6ab539] 2026-03-26 01:37:51.818870 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=2e41bcf9-ad92-42bb-b49e-289ca95def9f] 2026-03-26 01:37:51.866905 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=2ece1d7d-b762-44e0-80cf-0ec8d4e65a06] 2026-03-26 01:37:52.619066 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=458147c9-9b72-430f-8b8f-65d092fea4f7] 2026-03-26 01:37:52.624318 | orchestrator | 
openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-26 01:37:52.625350 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-26 01:37:52.625952 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-26 01:37:52.812003 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9a5a0685-9793-4191-a4d7-ff62567ebcb4] 2026-03-26 01:37:52.827938 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-26 01:37:52.831237 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=cb38da93-75ae-4123-a24c-4683e394bfd4] 2026-03-26 01:37:52.831900 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-26 01:37:52.833507 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-26 01:37:52.833564 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2026-03-26 01:37:52.833578 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-26 01:37:52.834357 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-26 01:37:52.838087 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-26 01:37:52.838493 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-26 01:37:52.844222 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 
2026-03-26 01:37:52.990731 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=c2a1604f-4c2f-48df-9b6a-44bdc3f502d4] 2026-03-26 01:37:52.995882 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-26 01:37:53.192459 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=44bd8a20-d120-46b1-8e0b-a5cf643511cf] 2026-03-26 01:37:53.198438 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-26 01:37:53.210611 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=e4189816-f698-4e66-9031-d3b5023525e2] 2026-03-26 01:37:53.218053 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-26 01:37:53.340506 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=eecaeda2-c988-4496-836a-5e4bbf345ad6] 2026-03-26 01:37:53.354403 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-26 01:37:53.495756 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=a7d71b60-6ba9-4719-80a0-0e386a7ff561] 2026-03-26 01:37:53.503368 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 2026-03-26 01:37:53.510303 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=d0917be5-d774-4ae8-aa8e-88d5b51d1da3] 2026-03-26 01:37:53.519779 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 
2026-03-26 01:37:53.824569 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=c25c7dd1-b378-4088-95fa-0599b351c201] 2026-03-26 01:37:53.833209 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-26 01:37:53.916314 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5d571e2f-b701-4263-883f-e7221cc0c90a] 2026-03-26 01:37:53.983735 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=2f203c7d-ac9e-403c-a339-ce95ce237fc4] 2026-03-26 01:37:54.066948 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=0d011944-d664-48dc-afd5-e876547c06cd] 2026-03-26 01:37:54.104322 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=4acb0156-c5c7-4457-a252-e47665f1efbd] 2026-03-26 01:37:54.145082 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=da761b17-04fe-4927-b0ae-964e0d48a3d8] 2026-03-26 01:37:54.211766 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=be5d018b-5284-4a13-84e4-e85247bdd4a2] 2026-03-26 01:37:54.300277 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=41231804-f8cb-4307-a0c3-4120acada0c0] 2026-03-26 01:37:54.449343 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=495645ce-cf06-4d1e-8843-90be4513fa23] 2026-03-26 01:37:54.461208 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=c09a035e-b37c-4623-b251-1a605583e2e5] 2026-03-26 01:37:55.291598 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s 
[id=ac1dfea1-1e14-4018-8481-5e4ce13cebb6] 2026-03-26 01:37:55.311977 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-26 01:37:55.325317 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-26 01:37:55.327077 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-26 01:37:55.343200 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 2026-03-26 01:37:55.344130 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-26 01:37:55.344586 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-26 01:37:55.349418 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-26 01:37:57.108031 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=abd7a441-ee41-4e76-ab04-3935b7055fc6] 2026-03-26 01:37:57.117956 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-26 01:37:57.121412 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-26 01:37:57.127334 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ec9eca104d958caee09087975d6798eb0bb86b60] 2026-03-26 01:37:57.128363 | orchestrator | local_file.inventory: Creating... 2026-03-26 01:37:57.134276 | orchestrator | local_file.inventory: Creation complete after 0s [id=30d1748c77c82795d6e88e4b1ba40d79e0a8dfb3] 2026-03-26 01:37:57.907808 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=abd7a441-ee41-4e76-ab04-3935b7055fc6] 2026-03-26 01:38:05.326959 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-26 01:38:05.330271 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2026-03-26 01:38:05.344664 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-26 01:38:05.344759 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-26 01:38:05.346098 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-26 01:38:05.354400 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2026-03-26 01:38:15.327316 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-26 01:38:15.330711 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-26 01:38:15.345393 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-26 01:38:15.345538 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-26 01:38:15.346604 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-26 01:38:15.355121 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-26 01:38:15.819291 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=471ae1e9-73d9-4b08-8a76-ea77edb20816] 2026-03-26 01:38:15.868637 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=df4dc442-7268-46a7-a597-7c64799a6d9c] 2026-03-26 01:38:15.932875 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=70a77876-4543-4e2a-b0a0-13586b389da7] 2026-03-26 01:38:25.327753 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-26 01:38:25.346901 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... 
[30s elapsed] 2026-03-26 01:38:25.356315 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-26 01:38:26.047576 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=827668dc-3e5a-4728-aa0b-89be81db88d7] 2026-03-26 01:38:26.736219 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 32s [id=8b5c9d75-54b2-4ca9-914e-7a0667bc48a0] 2026-03-26 01:38:26.762576 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 32s [id=65bc15c7-423f-40be-804d-198c847c7c7d] 2026-03-26 01:38:26.795931 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-26 01:38:26.800371 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-26 01:38:26.800440 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-26 01:38:26.801746 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-26 01:38:26.801967 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-26 01:38:26.802375 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-26 01:38:26.810500 | orchestrator | null_resource.node_semaphore: Creating... 2026-03-26 01:38:26.817203 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=268805636366249020] 2026-03-26 01:38:26.817622 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-26 01:38:26.826592 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-26 01:38:26.829029 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2026-03-26 01:38:26.844343 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 
2026-03-26 01:38:30.168253 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=471ae1e9-73d9-4b08-8a76-ea77edb20816/7e352b46-e023-45cf-8a88-51cc46240a44] 2026-03-26 01:38:30.193991 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=827668dc-3e5a-4728-aa0b-89be81db88d7/2dae49df-17cb-48b5-9940-ec5e7ec792d8] 2026-03-26 01:38:30.195657 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=65bc15c7-423f-40be-804d-198c847c7c7d/8ddd7966-84e6-4951-8a08-7b4fb4af2bd2] 2026-03-26 01:38:30.207899 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=471ae1e9-73d9-4b08-8a76-ea77edb20816/a52ec37c-b4ea-4f83-9b16-3c0f6ce85263] 2026-03-26 01:38:30.222830 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=65bc15c7-423f-40be-804d-198c847c7c7d/47760649-09e9-4ed8-8303-e5ee473a8102] 2026-03-26 01:38:30.234511 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=827668dc-3e5a-4728-aa0b-89be81db88d7/863ba5d2-7e2f-4393-95a6-83543745d331] 2026-03-26 01:38:36.313993 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=471ae1e9-73d9-4b08-8a76-ea77edb20816/7db5f133-fe7b-42a4-ad57-b076dc1856ab] 2026-03-26 01:38:36.319105 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=827668dc-3e5a-4728-aa0b-89be81db88d7/d11e4e4a-db1d-44df-8da9-5de7e993dd80] 2026-03-26 01:38:36.341102 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=65bc15c7-423f-40be-804d-198c847c7c7d/943c088c-5b56-4173-ab64-ec81e1cc816d] 2026-03-26 01:38:36.845081 | orchestrator | openstack_compute_instance_v2.manager_server: 
Still creating... [10s elapsed] 2026-03-26 01:38:46.845727 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-26 01:38:47.186064 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=58a35edb-618b-4cc9-abf2-21320a733826] 2026-03-26 01:38:47.198491 | orchestrator | 2026-03-26 01:38:47.198606 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-26 01:38:47.198618 | orchestrator | 2026-03-26 01:38:47.198625 | orchestrator | Outputs: 2026-03-26 01:38:47.198633 | orchestrator | 2026-03-26 01:38:47.198640 | orchestrator | manager_address = 2026-03-26 01:38:47.198648 | orchestrator | private_key = 2026-03-26 01:38:47.504493 | orchestrator | ok: Runtime: 0:01:07.805898 2026-03-26 01:38:47.536183 | 2026-03-26 01:38:47.536307 | TASK [Fetch manager address] 2026-03-26 01:38:48.043909 | orchestrator | ok 2026-03-26 01:38:48.055001 | 2026-03-26 01:38:48.055152 | TASK [Set manager_host address] 2026-03-26 01:38:48.137455 | orchestrator | ok 2026-03-26 01:38:48.146464 | 2026-03-26 01:38:48.146589 | LOOP [Update ansible collections] 2026-03-26 01:38:49.140717 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-26 01:38:49.141117 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-26 01:38:49.141178 | orchestrator | Starting galaxy collection install process 2026-03-26 01:38:49.141218 | orchestrator | Process install dependency map 2026-03-26 01:38:49.141254 | orchestrator | Starting collection install process 2026-03-26 01:38:49.141286 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2026-03-26 01:38:49.141321 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2026-03-26 01:38:49.141358 | 
orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-26 01:38:49.141419 | orchestrator | ok: Item: commons Runtime: 0:00:00.648358 2026-03-26 01:38:50.120864 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-26 01:38:50.121073 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-26 01:38:50.121126 | orchestrator | Starting galaxy collection install process 2026-03-26 01:38:50.121165 | orchestrator | Process install dependency map 2026-03-26 01:38:50.121201 | orchestrator | Starting collection install process 2026-03-26 01:38:50.121252 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2026-03-26 01:38:50.121288 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2026-03-26 01:38:50.121320 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-26 01:38:50.121372 | orchestrator | ok: Item: services Runtime: 0:00:00.676251 2026-03-26 01:38:50.144201 | 2026-03-26 01:38:50.144389 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-26 01:39:00.751225 | orchestrator | ok 2026-03-26 01:39:00.762093 | 2026-03-26 01:39:00.762233 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-26 01:40:00.808834 | orchestrator | ok 2026-03-26 01:40:00.818621 | 2026-03-26 01:40:00.818738 | TASK [Fetch manager ssh hostkey] 2026-03-26 01:40:02.400859 | orchestrator | Output suppressed because no_log was given 2026-03-26 01:40:02.420434 | 2026-03-26 01:40:02.420700 | TASK [Get ssh keypair from terraform environment] 2026-03-26 01:40:02.967315 | orchestrator | ok: Runtime: 0:00:00.010601 2026-03-26 01:40:02.983043 | 2026-03-26 01:40:02.983232 | TASK [Point out that the following task takes some time and does not give 
any output] 2026-03-26 01:40:03.023943 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-26 01:40:03.035898 | 2026-03-26 01:40:03.036063 | TASK [Run manager part 0] 2026-03-26 01:40:03.983186 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-26 01:40:04.034884 | orchestrator | 2026-03-26 01:40:04.034942 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-26 01:40:04.034949 | orchestrator | 2026-03-26 01:40:04.034964 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-26 01:40:06.290274 | orchestrator | ok: [testbed-manager] 2026-03-26 01:40:06.290338 | orchestrator | 2026-03-26 01:40:06.290363 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-26 01:40:06.290373 | orchestrator | 2026-03-26 01:40:06.290382 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:40:08.251550 | orchestrator | ok: [testbed-manager] 2026-03-26 01:40:08.252026 | orchestrator | 2026-03-26 01:40:08.252047 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-26 01:40:08.945866 | orchestrator | ok: [testbed-manager] 2026-03-26 01:40:08.945932 | orchestrator | 2026-03-26 01:40:08.945944 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-26 01:40:08.996427 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:40:08.996489 | orchestrator | 2026-03-26 01:40:08.996498 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-26 01:40:09.030754 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:40:09.030806 | orchestrator | 
2026-03-26 01:40:09.030813 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-26 01:40:09.062140 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:40:09.062193 | orchestrator | 2026-03-26 01:40:09.062201 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-26 01:40:09.092592 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:40:09.092691 | orchestrator | 2026-03-26 01:40:09.092709 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-26 01:40:09.124457 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:40:09.124507 | orchestrator | 2026-03-26 01:40:09.124515 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-26 01:40:09.163872 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:40:09.163921 | orchestrator | 2026-03-26 01:40:09.163929 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-26 01:40:09.210421 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:40:09.210486 | orchestrator | 2026-03-26 01:40:09.210498 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-26 01:40:09.994463 | orchestrator | changed: [testbed-manager] 2026-03-26 01:40:09.994548 | orchestrator | 2026-03-26 01:40:09.994559 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-26 01:43:09.211535 | orchestrator | changed: [testbed-manager] 2026-03-26 01:43:09.211720 | orchestrator | 2026-03-26 01:43:09.211739 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-26 01:44:39.463889 | orchestrator | changed: [testbed-manager] 2026-03-26 01:44:39.463987 | orchestrator | 2026-03-26 01:44:39.464004 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-03-26 01:45:04.859334 | orchestrator | changed: [testbed-manager] 2026-03-26 01:45:04.859483 | orchestrator | 2026-03-26 01:45:04.859516 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-26 01:45:15.033446 | orchestrator | changed: [testbed-manager] 2026-03-26 01:45:15.033595 | orchestrator | 2026-03-26 01:45:15.033628 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-26 01:45:15.085901 | orchestrator | ok: [testbed-manager] 2026-03-26 01:45:15.086089 | orchestrator | 2026-03-26 01:45:15.086111 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-26 01:45:15.913709 | orchestrator | ok: [testbed-manager] 2026-03-26 01:45:15.913766 | orchestrator | 2026-03-26 01:45:15.913774 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-26 01:45:16.666013 | orchestrator | changed: [testbed-manager] 2026-03-26 01:45:16.666108 | orchestrator | 2026-03-26 01:45:16.666120 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-26 01:45:23.397674 | orchestrator | changed: [testbed-manager] 2026-03-26 01:45:23.397756 | orchestrator | 2026-03-26 01:45:23.397795 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-26 01:45:30.005964 | orchestrator | changed: [testbed-manager] 2026-03-26 01:45:30.006120 | orchestrator | 2026-03-26 01:45:30.006143 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-26 01:45:33.071731 | orchestrator | changed: [testbed-manager] 2026-03-26 01:45:33.071776 | orchestrator | 2026-03-26 01:45:33.071785 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-26 01:45:35.050473 | 
orchestrator | changed: [testbed-manager] 2026-03-26 01:45:35.051304 | orchestrator | 2026-03-26 01:45:35.051327 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-26 01:45:36.171195 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-26 01:45:36.171246 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-26 01:45:36.171254 | orchestrator | 2026-03-26 01:45:36.171260 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-26 01:45:36.214885 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-26 01:45:36.214938 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-26 01:45:36.214975 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-26 01:45:36.214981 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-26 01:45:39.579749 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-26 01:45:39.579840 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-26 01:45:39.579864 | orchestrator | 2026-03-26 01:45:39.579883 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-26 01:45:40.159221 | orchestrator | changed: [testbed-manager] 2026-03-26 01:45:40.159308 | orchestrator | 2026-03-26 01:45:40.159321 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-26 01:46:01.512506 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-26 01:46:01.512570 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-26 01:46:01.512584 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-26 01:46:01.512594 | orchestrator | 2026-03-26 01:46:01.512606 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-26 01:46:03.920886 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-26 01:46:03.921002 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-26 01:46:03.921021 | orchestrator | 2026-03-26 01:46:03.921034 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-26 01:46:03.921047 | orchestrator | 2026-03-26 01:46:03.921059 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:46:05.466271 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:05.466344 | orchestrator | 2026-03-26 01:46:05.466353 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-26 01:46:05.511902 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:05.511975 | 
orchestrator | 2026-03-26 01:46:05.512005 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-26 01:46:05.583360 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:05.583416 | orchestrator | 2026-03-26 01:46:05.583428 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-26 01:46:06.367528 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:06.367631 | orchestrator | 2026-03-26 01:46:06.367655 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-26 01:46:07.138323 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:07.138363 | orchestrator | 2026-03-26 01:46:07.138371 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-26 01:46:08.571760 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-26 01:46:08.571827 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-26 01:46:08.571833 | orchestrator | 2026-03-26 01:46:08.571850 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-26 01:46:10.214450 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:10.214519 | orchestrator | 2026-03-26 01:46:10.214529 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-26 01:46:12.266527 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-26 01:46:12.266623 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-26 01:46:12.266635 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-26 01:46:12.266644 | orchestrator | 2026-03-26 01:46:12.266654 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-26 01:46:12.324897 | orchestrator | skipping: 
[testbed-manager] 2026-03-26 01:46:12.324964 | orchestrator | 2026-03-26 01:46:12.324973 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-26 01:46:12.397587 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:12.397705 | orchestrator | 2026-03-26 01:46:12.397734 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-26 01:46:13.348892 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:13.348987 | orchestrator | 2026-03-26 01:46:13.349029 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-26 01:46:13.424835 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:13.424936 | orchestrator | 2026-03-26 01:46:13.424954 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-26 01:46:14.483117 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-26 01:46:14.483179 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:14.483194 | orchestrator | 2026-03-26 01:46:14.483207 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-26 01:46:14.525128 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:14.525170 | orchestrator | 2026-03-26 01:46:14.525178 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-26 01:46:14.566745 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:14.566786 | orchestrator | 2026-03-26 01:46:14.566795 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-26 01:46:14.597920 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:14.597958 | orchestrator | 2026-03-26 01:46:14.597966 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-26 01:46:14.671935 | 
orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:14.671980 | orchestrator | 2026-03-26 01:46:14.671989 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-26 01:46:15.560488 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:15.560527 | orchestrator | 2026-03-26 01:46:15.560532 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-26 01:46:15.560537 | orchestrator | 2026-03-26 01:46:15.560541 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:46:17.109645 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:17.109692 | orchestrator | 2026-03-26 01:46:17.109700 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-26 01:46:18.103382 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:18.103460 | orchestrator | 2026-03-26 01:46:18.103475 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:46:18.103486 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-26 01:46:18.103495 | orchestrator | 2026-03-26 01:46:18.317148 | orchestrator | ok: Runtime: 0:06:14.895732 2026-03-26 01:46:18.331467 | 2026-03-26 01:46:18.331643 | TASK [Point out that the login on the manager is now possible] 2026-03-26 01:46:18.373100 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-26 01:46:18.380398 | 2026-03-26 01:46:18.380517 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-26 01:46:18.415593 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-03-26 01:46:18.424499 | 2026-03-26 01:46:18.424630 | TASK [Run manager part 1 + 2] 2026-03-26 01:46:19.301967 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-26 01:46:19.360797 | orchestrator | 2026-03-26 01:46:19.360850 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-26 01:46:19.360857 | orchestrator | 2026-03-26 01:46:19.360870 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:46:22.071168 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:22.071218 | orchestrator | 2026-03-26 01:46:22.071241 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-26 01:46:22.107230 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:22.107315 | orchestrator | 2026-03-26 01:46:22.107335 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-26 01:46:22.145521 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:22.145570 | orchestrator | 2026-03-26 01:46:22.145578 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-26 01:46:22.184529 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:22.184588 | orchestrator | 2026-03-26 01:46:22.184598 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-26 01:46:22.259769 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:22.259831 | orchestrator | 2026-03-26 01:46:22.259842 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-26 01:46:22.336044 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:22.336100 | orchestrator | 2026-03-26 01:46:22.336110 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-26 01:46:22.399728 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-26 01:46:22.399783 | orchestrator | 2026-03-26 01:46:22.399789 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-26 01:46:23.228127 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:23.228197 | orchestrator | 2026-03-26 01:46:23.228208 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-26 01:46:23.282700 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:23.282761 | orchestrator | 2026-03-26 01:46:23.282770 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-26 01:46:24.727488 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:24.727559 | orchestrator | 2026-03-26 01:46:24.727572 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-26 01:46:25.352153 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:25.352204 | orchestrator | 2026-03-26 01:46:25.352210 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-26 01:46:26.516245 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:26.516304 | orchestrator | 2026-03-26 01:46:26.516313 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-26 01:46:43.624804 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:43.624894 | orchestrator | 2026-03-26 01:46:43.624910 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-26 01:46:44.326795 | orchestrator | ok: [testbed-manager] 2026-03-26 01:46:44.326838 | orchestrator | 2026-03-26 01:46:44.326848 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-26 01:46:44.382242 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:46:44.382284 | orchestrator | 2026-03-26 01:46:44.382292 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-26 01:46:45.394825 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:45.394935 | orchestrator | 2026-03-26 01:46:45.394960 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-26 01:46:46.446817 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:46.446904 | orchestrator | 2026-03-26 01:46:46.446917 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-26 01:46:47.092726 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:47.092819 | orchestrator | 2026-03-26 01:46:47.092831 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-26 01:46:47.136754 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-26 01:46:47.136828 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-26 01:46:47.136834 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-26 01:46:47.136839 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-26 01:46:49.788033 | orchestrator | changed: [testbed-manager] 2026-03-26 01:46:49.788233 | orchestrator | 2026-03-26 01:46:49.788262 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-26 01:46:59.856515 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-26 01:46:59.856629 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-26 01:46:59.856655 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-26 01:46:59.856674 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-26 01:46:59.856703 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-26 01:46:59.856720 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-26 01:46:59.856736 | orchestrator | 2026-03-26 01:46:59.856753 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-26 01:47:01.027000 | orchestrator | changed: [testbed-manager] 2026-03-26 01:47:01.027144 | orchestrator | 2026-03-26 01:47:01.027174 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-26 01:47:01.071051 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:47:01.071157 | orchestrator | 2026-03-26 01:47:01.071166 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-26 01:47:04.636751 | orchestrator | changed: [testbed-manager] 2026-03-26 01:47:04.636797 | orchestrator | 2026-03-26 01:47:04.636806 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-26 01:47:04.678841 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:47:04.678882 | orchestrator | 2026-03-26 01:47:04.678890 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-26 01:48:56.303138 | orchestrator | changed: [testbed-manager] 2026-03-26 
01:48:56.303183 | orchestrator | 2026-03-26 01:48:56.303191 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-26 01:48:57.623780 | orchestrator | ok: [testbed-manager] 2026-03-26 01:48:57.624430 | orchestrator | 2026-03-26 01:48:57.624449 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:48:57.624458 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-26 01:48:57.624465 | orchestrator | 2026-03-26 01:48:58.066304 | orchestrator | ok: Runtime: 0:02:38.999822 2026-03-26 01:48:58.086086 | 2026-03-26 01:48:58.086259 | TASK [Reboot manager] 2026-03-26 01:48:59.627528 | orchestrator | ok: Runtime: 0:00:00.978791 2026-03-26 01:48:59.644409 | 2026-03-26 01:48:59.644580 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-26 01:49:16.056101 | orchestrator | ok 2026-03-26 01:49:16.063875 | 2026-03-26 01:49:16.064008 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-26 01:50:16.114628 | orchestrator | ok 2026-03-26 01:50:16.124003 | 2026-03-26 01:50:16.124144 | TASK [Deploy manager + bootstrap nodes] 2026-03-26 01:50:19.036866 | orchestrator | 2026-03-26 01:50:19.037080 | orchestrator | # DEPLOY MANAGER 2026-03-26 01:50:19.037104 | orchestrator | 2026-03-26 01:50:19.037125 | orchestrator | + set -e 2026-03-26 01:50:19.037147 | orchestrator | + echo 2026-03-26 01:50:19.037171 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-26 01:50:19.037199 | orchestrator | + echo 2026-03-26 01:50:19.037256 | orchestrator | + cat /opt/manager-vars.sh 2026-03-26 01:50:19.041598 | orchestrator | export NUMBER_OF_NODES=6 2026-03-26 01:50:19.041683 | orchestrator | 2026-03-26 01:50:19.041702 | orchestrator | export CEPH_VERSION=reef 2026-03-26 01:50:19.041725 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-26 01:50:19.041745 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-03-26 01:50:19.041782 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-26 01:50:19.041802 | orchestrator | 2026-03-26 01:50:19.041829 | orchestrator | export ARA=false 2026-03-26 01:50:19.041848 | orchestrator | export DEPLOY_MODE=manager 2026-03-26 01:50:19.041872 | orchestrator | export TEMPEST=false 2026-03-26 01:50:19.041891 | orchestrator | export IS_ZUUL=true 2026-03-26 01:50:19.041908 | orchestrator | 2026-03-26 01:50:19.041936 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 01:50:19.041956 | orchestrator | export EXTERNAL_API=false 2026-03-26 01:50:19.041975 | orchestrator | 2026-03-26 01:50:19.041990 | orchestrator | export IMAGE_USER=ubuntu 2026-03-26 01:50:19.042004 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-26 01:50:19.042015 | orchestrator | 2026-03-26 01:50:19.042092 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-26 01:50:19.042117 | orchestrator | 2026-03-26 01:50:19.042128 | orchestrator | + echo 2026-03-26 01:50:19.042141 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 01:50:19.042735 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 01:50:19.042782 | orchestrator | ++ INTERACTIVE=false 2026-03-26 01:50:19.042796 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 01:50:19.042809 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 01:50:19.042950 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 01:50:19.042968 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 01:50:19.042979 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 01:50:19.042990 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 01:50:19.043001 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 01:50:19.043021 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 01:50:19.043034 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 01:50:19.043045 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 01:50:19.043056 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 01:50:19.043067 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 01:50:19.043090 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 01:50:19.043101 | orchestrator | ++ export ARA=false 2026-03-26 01:50:19.043112 | orchestrator | ++ ARA=false 2026-03-26 01:50:19.043123 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 01:50:19.043134 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 01:50:19.043145 | orchestrator | ++ export TEMPEST=false 2026-03-26 01:50:19.043156 | orchestrator | ++ TEMPEST=false 2026-03-26 01:50:19.043167 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 01:50:19.043302 | orchestrator | ++ IS_ZUUL=true 2026-03-26 01:50:19.043318 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 01:50:19.043330 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 01:50:19.043368 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 01:50:19.043379 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 01:50:19.043390 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 01:50:19.043401 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 01:50:19.043412 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 01:50:19.043422 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 01:50:19.043433 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 01:50:19.043444 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 01:50:19.043455 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-26 01:50:19.104986 | orchestrator | + docker version 2026-03-26 01:50:19.209257 | orchestrator | Client: Docker Engine - Community 2026-03-26 01:50:19.209416 | orchestrator | Version: 27.5.1 2026-03-26 01:50:19.209437 | orchestrator | API version: 1.47 2026-03-26 01:50:19.209449 | orchestrator | Go version: go1.22.11 2026-03-26 01:50:19.209460 | orchestrator | Git commit: 9f9e405 2026-03-26 01:50:19.209471 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-26 01:50:19.209482 | orchestrator | OS/Arch: linux/amd64 2026-03-26 01:50:19.209493 | orchestrator | Context: default 2026-03-26 01:50:19.209504 | orchestrator | 2026-03-26 01:50:19.209515 | orchestrator | Server: Docker Engine - Community 2026-03-26 01:50:19.209526 | orchestrator | Engine: 2026-03-26 01:50:19.209538 | orchestrator | Version: 27.5.1 2026-03-26 01:50:19.209549 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-26 01:50:19.209592 | orchestrator | Go version: go1.22.11 2026-03-26 01:50:19.209604 | orchestrator | Git commit: 4c9b3b0 2026-03-26 01:50:19.209614 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-26 01:50:19.209625 | orchestrator | OS/Arch: linux/amd64 2026-03-26 01:50:19.209636 | orchestrator | Experimental: false 2026-03-26 01:50:19.209646 | orchestrator | containerd: 2026-03-26 01:50:19.209657 | orchestrator | Version: v2.2.2 2026-03-26 01:50:19.209668 | orchestrator | GitCommit: 301b2dac98f15c27117da5c8af12118a041a31d9 2026-03-26 01:50:19.209679 | orchestrator | runc: 2026-03-26 01:50:19.209690 | orchestrator | Version: 1.3.4 2026-03-26 01:50:19.209701 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-26 01:50:19.209712 | orchestrator | docker-init: 2026-03-26 01:50:19.209722 | orchestrator | Version: 0.19.0 2026-03-26 01:50:19.209734 | orchestrator | GitCommit: de40ad0 2026-03-26 01:50:19.212891 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-26 01:50:19.223447 | orchestrator | + set -e 2026-03-26 01:50:19.223523 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 01:50:19.223537 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 01:50:19.223548 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 01:50:19.223559 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 01:50:19.223569 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 01:50:19.223580 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 
01:50:19.223592 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 01:50:19.223603 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 01:50:19.223614 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 01:50:19.223625 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 01:50:19.223636 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 01:50:19.223646 | orchestrator | ++ export ARA=false 2026-03-26 01:50:19.223658 | orchestrator | ++ ARA=false 2026-03-26 01:50:19.223696 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 01:50:19.223718 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 01:50:19.223739 | orchestrator | ++ export TEMPEST=false 2026-03-26 01:50:19.223750 | orchestrator | ++ TEMPEST=false 2026-03-26 01:50:19.223761 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 01:50:19.223771 | orchestrator | ++ IS_ZUUL=true 2026-03-26 01:50:19.223782 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 01:50:19.223793 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 01:50:19.223804 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 01:50:19.223814 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 01:50:19.223825 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 01:50:19.223835 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 01:50:19.223846 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 01:50:19.223857 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 01:50:19.223868 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 01:50:19.223878 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 01:50:19.223893 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 01:50:19.223905 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 01:50:19.223924 | orchestrator | ++ INTERACTIVE=false 2026-03-26 01:50:19.223942 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 01:50:19.223967 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-03-26 01:50:19.224178 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-26 01:50:19.224252 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-03-26 01:50:19.229801 | orchestrator | + set -e 2026-03-26 01:50:19.229826 | orchestrator | + VERSION=9.5.0 2026-03-26 01:50:19.229839 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-03-26 01:50:19.239120 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-26 01:50:19.239192 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-26 01:50:19.242556 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-26 01:50:19.245600 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-26 01:50:19.252091 | orchestrator | /opt/configuration ~ 2026-03-26 01:50:19.252190 | orchestrator | + set -e 2026-03-26 01:50:19.252208 | orchestrator | + pushd /opt/configuration 2026-03-26 01:50:19.252220 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 01:50:19.254100 | orchestrator | + source /opt/venv/bin/activate 2026-03-26 01:50:19.255026 | orchestrator | ++ deactivate nondestructive 2026-03-26 01:50:19.255121 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:19.255217 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:19.255498 | orchestrator | ++ hash -r 2026-03-26 01:50:19.255535 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:19.255554 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-26 01:50:19.255572 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-26 01:50:19.255591 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-26 01:50:19.255611 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-26 01:50:19.255630 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-26 01:50:19.255648 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-26 01:50:19.255666 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-26 01:50:19.255678 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 01:50:19.255690 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 01:50:19.255701 | orchestrator | ++ export PATH 2026-03-26 01:50:19.255713 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:19.255724 | orchestrator | ++ '[' -z '' ']' 2026-03-26 01:50:19.255735 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-26 01:50:19.255746 | orchestrator | ++ PS1='(venv) ' 2026-03-26 01:50:19.255756 | orchestrator | ++ export PS1 2026-03-26 01:50:19.255767 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-26 01:50:19.255778 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-26 01:50:19.255789 | orchestrator | ++ hash -r 2026-03-26 01:50:19.255800 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-26 01:50:20.615553 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-26 01:50:20.616769 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-26 01:50:20.618592 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-26 01:50:20.620103 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-26 01:50:20.621380 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-26 01:50:20.633214 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-26 01:50:20.635848 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-26 01:50:20.637240 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-26 01:50:20.639102 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-26 01:50:20.683961 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-26 01:50:20.685938 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-26 01:50:20.687710 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-26 01:50:20.689371 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-26 01:50:20.694881 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-26 01:50:20.950920 | orchestrator | ++ which gilt 2026-03-26 01:50:20.954239 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-26 01:50:20.954307 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-26 01:50:21.231924 | orchestrator | osism.cfg-generics: 2026-03-26 01:50:21.403886 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-26 01:50:21.404024 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-26 01:50:21.404066 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-26 01:50:21.404086 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-26 01:50:22.347791 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-26 01:50:22.359179 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-26 01:50:22.721674 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-26 01:50:22.783000 | orchestrator | ~ 2026-03-26 01:50:22.783092 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 01:50:22.783104 | orchestrator | + deactivate 2026-03-26 01:50:22.783113 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-26 01:50:22.783122 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 01:50:22.783129 | orchestrator | + export PATH 2026-03-26 01:50:22.783136 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-26 01:50:22.783144 | orchestrator | + '[' -n '' ']' 2026-03-26 01:50:22.783153 | orchestrator | + hash -r 2026-03-26 01:50:22.783160 | orchestrator | + '[' -n '' ']' 2026-03-26 01:50:22.783166 | orchestrator | + unset VIRTUAL_ENV 2026-03-26 01:50:22.783173 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-26 01:50:22.783180 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-26 01:50:22.783187 | orchestrator | + unset -f deactivate 2026-03-26 01:50:22.783194 | orchestrator | + popd 2026-03-26 01:50:22.784046 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-26 01:50:22.784150 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-26 01:50:22.784644 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-26 01:50:22.841779 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-26 01:50:22.841866 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-26 01:50:22.842306 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-26 01:50:22.900800 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-26 01:50:22.901640 | orchestrator | ++ semver 2024.2 2025.1 2026-03-26 01:50:22.956494 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-26 01:50:22.956599 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-03-26 01:50:23.050743 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 01:50:23.050894 | orchestrator | + source /opt/venv/bin/activate 2026-03-26 01:50:23.050949 | orchestrator | ++ deactivate nondestructive 2026-03-26 01:50:23.050985 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:23.051002 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:23.051018 | orchestrator | ++ hash -r 2026-03-26 01:50:23.051036 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:23.051052 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-26 01:50:23.051069 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-26 01:50:23.051086 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-26 01:50:23.051103 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-26 01:50:23.051120 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-26 01:50:23.051138 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-26 01:50:23.051154 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-26 01:50:23.051185 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 01:50:23.051229 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 01:50:23.051247 | orchestrator | ++ export PATH 2026-03-26 01:50:23.051263 | orchestrator | ++ '[' -n '' ']' 2026-03-26 01:50:23.051280 | orchestrator | ++ '[' -z '' ']' 2026-03-26 01:50:23.051297 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-26 01:50:23.051313 | orchestrator | ++ PS1='(venv) ' 2026-03-26 01:50:23.051328 | orchestrator | ++ export PS1 2026-03-26 01:50:23.051370 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-26 01:50:23.051389 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-26 01:50:23.051406 | orchestrator | ++ hash -r 2026-03-26 01:50:23.051423 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-26 01:50:24.445004 | orchestrator | 2026-03-26 01:50:24.445139 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-26 01:50:24.445170 | orchestrator | 2026-03-26 01:50:24.445191 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-26 01:50:25.081892 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:25.082000 | orchestrator | 2026-03-26 01:50:25.082089 | orchestrator | TASK [Copy fact files] ********************************************************* 
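The feature gating earlier in the trace relies on a `semver` helper that prints -1, 0, or 1 (e.g. `semver 9.5.0 7.0.0` yields 1, enabling `enable_osism_kubernetes`, while `semver 9.5.0 10.0.0-0` yields -1). The helper's implementation is not shown in the log; a rough shell equivalent using GNU `sort -V` (an assumption — true semver pre-release ordering is only approximated) could look like:

```shell
# Hedged reimplementation sketch of the semver compare helper seen in the
# trace: prints -1, 0, or 1 for a<b, a==b, a>b. Uses GNU sort -V, which
# matches the comparisons in this log but is not exact semver.
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
    return
  fi
  # The version that sorts first under sort -V is the smaller one.
  if [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}

semver_cmp 9.5.0 7.0.0      # prints 1
semver_cmp 9.5.0 10.0.0-0   # prints -1
semver_cmp 2024.2 2025.1    # prints -1
```

The `[[ $(semver_cmp ...) -ge 0 ]]` style check then reads as "current version is at least the threshold", which is exactly the shape of the `[[ 1 -ge 0 ]]` / `[[ -1 -ge 0 ]]` tests in the trace.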
2026-03-26 01:50:26.208594 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:26.208706 | orchestrator | 2026-03-26 01:50:26.208721 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-26 01:50:26.208758 | orchestrator | 2026-03-26 01:50:26.208768 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:50:28.649904 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:28.650005 | orchestrator | 2026-03-26 01:50:28.650114 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-26 01:50:28.701816 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:28.701897 | orchestrator | 2026-03-26 01:50:28.701906 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-26 01:50:29.204750 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:29.204875 | orchestrator | 2026-03-26 01:50:29.204907 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-26 01:50:29.231660 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:50:29.231792 | orchestrator | 2026-03-26 01:50:29.231809 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-26 01:50:29.593217 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:29.593296 | orchestrator | 2026-03-26 01:50:29.593306 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-26 01:50:29.930191 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:29.930293 | orchestrator | 2026-03-26 01:50:29.930307 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-26 01:50:30.055754 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:50:30.055905 | orchestrator | 2026-03-26 01:50:30.055931 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-03-26 01:50:30.055947 | orchestrator | 2026-03-26 01:50:30.055957 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:50:32.082349 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:32.082497 | orchestrator | 2026-03-26 01:50:32.082512 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-26 01:50:32.189465 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-26 01:50:32.189589 | orchestrator | 2026-03-26 01:50:32.189612 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-26 01:50:32.260755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-26 01:50:32.260868 | orchestrator | 2026-03-26 01:50:32.260889 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-26 01:50:33.450642 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-26 01:50:33.450752 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-03-26 01:50:33.450767 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-26 01:50:33.450780 | orchestrator | 2026-03-26 01:50:33.450795 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-26 01:50:35.430789 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-26 01:50:35.430899 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-26 01:50:35.430912 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-26 01:50:35.430921 | orchestrator | 2026-03-26 01:50:35.430930 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-03-26 01:50:36.109664 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-26 01:50:36.109735 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:36.109741 | orchestrator | 2026-03-26 01:50:36.109746 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-26 01:50:36.840706 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-26 01:50:36.840803 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:36.840818 | orchestrator | 2026-03-26 01:50:36.840828 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-26 01:50:36.904924 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:50:36.905004 | orchestrator | 2026-03-26 01:50:36.905013 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-26 01:50:37.290822 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:37.290925 | orchestrator | 2026-03-26 01:50:37.290941 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-26 01:50:37.382241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-26 01:50:37.382393 | orchestrator | 2026-03-26 01:50:37.382420 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-26 01:50:38.616867 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:38.616966 | orchestrator | 2026-03-26 01:50:38.616984 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-26 01:50:39.543981 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:39.544085 | orchestrator | 2026-03-26 01:50:39.544102 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-26 01:50:50.755040 | 
orchestrator | changed: [testbed-manager] 2026-03-26 01:50:50.755174 | orchestrator | 2026-03-26 01:50:50.755197 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-26 01:50:50.814627 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:50:50.814708 | orchestrator | 2026-03-26 01:50:50.814739 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-26 01:50:50.814749 | orchestrator | 2026-03-26 01:50:50.814757 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:50:52.854680 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:52.854818 | orchestrator | 2026-03-26 01:50:52.854846 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-26 01:50:52.978568 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-26 01:50:52.978676 | orchestrator | 2026-03-26 01:50:52.978695 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-26 01:50:53.053672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-26 01:50:53.053754 | orchestrator | 2026-03-26 01:50:53.053762 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-26 01:50:55.788018 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:55.788122 | orchestrator | 2026-03-26 01:50:55.788138 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-26 01:50:55.848966 | orchestrator | ok: [testbed-manager] 2026-03-26 01:50:55.849069 | orchestrator | 2026-03-26 01:50:55.849086 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-26 01:50:55.988570 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-26 01:50:55.988691 | orchestrator | 2026-03-26 01:50:55.988715 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-26 01:50:58.914944 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-26 01:50:58.915039 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-26 01:50:58.915051 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-26 01:50:58.915060 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-26 01:50:58.915069 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-26 01:50:58.915077 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-26 01:50:58.915085 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-26 01:50:58.915094 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-26 01:50:58.915102 | orchestrator | 2026-03-26 01:50:58.915111 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-26 01:50:59.582798 | orchestrator | changed: [testbed-manager] 2026-03-26 01:50:59.582926 | orchestrator | 2026-03-26 01:50:59.582950 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-26 01:51:00.263144 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:00.263222 | orchestrator | 2026-03-26 01:51:00.263233 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-26 01:51:00.354903 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-26 01:51:00.354979 | orchestrator | 2026-03-26 01:51:00.354987 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-03-26 01:51:01.663001 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-26 01:51:01.663103 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-26 01:51:01.663122 | orchestrator | 2026-03-26 01:51:01.663141 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-26 01:51:02.310681 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:02.310761 | orchestrator | 2026-03-26 01:51:02.310793 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-26 01:51:02.375778 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:51:02.375903 | orchestrator | 2026-03-26 01:51:02.375930 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-26 01:51:02.470912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-26 01:51:02.471013 | orchestrator | 2026-03-26 01:51:02.471028 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-26 01:51:03.156687 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:03.156810 | orchestrator | 2026-03-26 01:51:03.156828 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-26 01:51:03.240179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-26 01:51:03.240262 | orchestrator | 2026-03-26 01:51:03.240271 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-26 01:51:04.747636 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-26 01:51:04.747747 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-03-26 01:51:04.747762 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:04.747777 | orchestrator | 2026-03-26 01:51:04.747789 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-26 01:51:05.468886 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:05.468989 | orchestrator | 2026-03-26 01:51:05.469006 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-26 01:51:05.528565 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:51:05.528658 | orchestrator | 2026-03-26 01:51:05.528672 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-26 01:51:05.633801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-26 01:51:05.633899 | orchestrator | 2026-03-26 01:51:05.633914 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-26 01:51:06.219342 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:06.219539 | orchestrator | 2026-03-26 01:51:06.219560 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-26 01:51:06.653247 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:06.653348 | orchestrator | 2026-03-26 01:51:06.653362 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-26 01:51:08.127294 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-26 01:51:08.127388 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-26 01:51:08.127400 | orchestrator | 2026-03-26 01:51:08.127457 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-26 01:51:08.822549 | orchestrator | changed: [testbed-manager] 2026-03-26 
01:51:08.822648 | orchestrator | 2026-03-26 01:51:08.822665 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-26 01:51:09.219527 | orchestrator | ok: [testbed-manager] 2026-03-26 01:51:09.219615 | orchestrator | 2026-03-26 01:51:09.219630 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-26 01:51:09.551959 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:09.552065 | orchestrator | 2026-03-26 01:51:09.552089 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-26 01:51:09.592558 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:51:09.592659 | orchestrator | 2026-03-26 01:51:09.592685 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-26 01:51:09.660645 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-26 01:51:09.660739 | orchestrator | 2026-03-26 01:51:09.660752 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-03-26 01:51:09.709673 | orchestrator | ok: [testbed-manager] 2026-03-26 01:51:09.709754 | orchestrator | 2026-03-26 01:51:09.709769 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-26 01:51:11.709098 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-26 01:51:11.709187 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-26 01:51:11.709202 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-26 01:51:11.709215 | orchestrator | 2026-03-26 01:51:11.709228 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-26 01:51:12.429306 | orchestrator | changed: [testbed-manager] 2026-03-26 
01:51:12.429456 | orchestrator | 2026-03-26 01:51:12.429479 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-26 01:51:13.187282 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:13.187408 | orchestrator | 2026-03-26 01:51:13.187529 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-26 01:51:13.903005 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:13.903113 | orchestrator | 2026-03-26 01:51:13.903130 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-26 01:51:13.995308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-26 01:51:13.995406 | orchestrator | 2026-03-26 01:51:13.995484 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-26 01:51:14.055396 | orchestrator | ok: [testbed-manager] 2026-03-26 01:51:14.055524 | orchestrator | 2026-03-26 01:51:14.055536 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-03-26 01:51:14.810888 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-26 01:51:14.810995 | orchestrator | 2026-03-26 01:51:14.811011 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-26 01:51:14.908486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-26 01:51:14.908615 | orchestrator | 2026-03-26 01:51:14.908645 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-26 01:51:15.683636 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:15.683729 | orchestrator | 2026-03-26 01:51:15.683742 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-03-26 01:51:16.348677 | orchestrator | ok: [testbed-manager] 2026-03-26 01:51:16.348793 | orchestrator | 2026-03-26 01:51:16.348816 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-26 01:51:16.414962 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:51:16.415112 | orchestrator | 2026-03-26 01:51:16.415154 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-26 01:51:16.483200 | orchestrator | ok: [testbed-manager] 2026-03-26 01:51:16.483302 | orchestrator | 2026-03-26 01:51:16.483316 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-26 01:51:17.379507 | orchestrator | changed: [testbed-manager] 2026-03-26 01:51:17.379624 | orchestrator | 2026-03-26 01:51:17.379640 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-26 01:52:32.809560 | orchestrator | changed: [testbed-manager] 2026-03-26 01:52:32.809681 | orchestrator | 2026-03-26 01:52:32.809698 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-26 01:52:33.880475 | orchestrator | ok: [testbed-manager] 2026-03-26 01:52:33.880645 | orchestrator | 2026-03-26 01:52:33.880664 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-26 01:52:33.939014 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:52:33.939141 | orchestrator | 2026-03-26 01:52:33.939168 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-26 01:52:36.774878 | orchestrator | changed: [testbed-manager] 2026-03-26 01:52:36.774999 | orchestrator | 2026-03-26 01:52:36.775011 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-03-26 01:52:36.888908 | orchestrator | ok: [testbed-manager] 2026-03-26 01:52:36.889003 | orchestrator | 2026-03-26 01:52:36.889017 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-26 01:52:36.889028 | orchestrator | 2026-03-26 01:52:36.889037 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-26 01:52:36.944657 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:52:36.944744 | orchestrator | 2026-03-26 01:52:36.944755 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-26 01:53:37.000726 | orchestrator | Pausing for 60 seconds 2026-03-26 01:53:37.000816 | orchestrator | changed: [testbed-manager] 2026-03-26 01:53:37.000827 | orchestrator | 2026-03-26 01:53:37.000834 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-26 01:53:40.682972 | orchestrator | changed: [testbed-manager] 2026-03-26 01:53:40.683099 | orchestrator | 2026-03-26 01:53:40.683113 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-03-26 01:54:42.846249 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-26 01:54:42.846371 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-26 01:54:42.846411 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 
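The "Wait for an healthy manager service" handler above retries until the container's healthcheck reports healthy (the log shows it starting from 50 retries). The handler's actual task code is not in the log; a sketch of that retry pattern using `docker inspect`'s `.State.Health.Status` field, with the container name and retry/delay defaults as assumptions:

```shell
# Illustrative sketch of a wait-until-healthy loop like the handler above.
# Container name, retry count, and delay are assumptions for the example.
wait_healthy() {
  local name=$1 retries=${2:-50} delay=${3:-5} state i
  for ((i = 0; i < retries; i++)); do
    # Query the container's healthcheck status; fall back if it is absent.
    state=$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null) \
      || state=missing
    if [ "$state" = "healthy" ]; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}
```

A caller would invoke something like `wait_healthy manager 50 5` and fail the play if it returns non-zero, which matches the FAILED - RETRYING lines followed by a final `changed` once the service turns healthy.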
2026-03-26 01:54:42.846426 | orchestrator | changed: [testbed-manager] 2026-03-26 01:54:42.846441 | orchestrator | 2026-03-26 01:54:42.846454 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-26 01:54:54.478804 | orchestrator | changed: [testbed-manager] 2026-03-26 01:54:54.478946 | orchestrator | 2026-03-26 01:54:54.478974 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-26 01:54:54.587332 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-26 01:54:54.587514 | orchestrator | 2026-03-26 01:54:54.587535 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-26 01:54:54.587549 | orchestrator | 2026-03-26 01:54:54.587560 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-26 01:54:54.649109 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:54:54.649235 | orchestrator | 2026-03-26 01:54:54.649267 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-26 01:54:54.738104 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-26 01:54:54.738198 | orchestrator | 2026-03-26 01:54:54.738211 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-03-26 01:54:55.552412 | orchestrator | changed: [testbed-manager] 2026-03-26 01:54:55.552542 | orchestrator | 2026-03-26 01:54:55.552570 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-26 01:54:58.969004 | orchestrator | ok: [testbed-manager] 2026-03-26 01:54:58.969099 | orchestrator | 2026-03-26 01:54:58.969111 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-03-26 01:54:59.037293 | orchestrator | ok: [testbed-manager] => { 2026-03-26 01:54:59.037363 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-26 01:54:59.037370 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-26 01:54:59.037375 | orchestrator | "Checking running containers against expected versions...", 2026-03-26 01:54:59.037380 | orchestrator | "", 2026-03-26 01:54:59.037385 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-26 01:54:59.037390 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-26 01:54:59.037395 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037399 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-03-26 01:54:59.037403 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037407 | orchestrator | "", 2026-03-26 01:54:59.037411 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-26 01:54:59.037430 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-26 01:54:59.037434 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037438 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-03-26 01:54:59.037441 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037445 | orchestrator | "", 2026-03-26 01:54:59.037449 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-03-26 01:54:59.037453 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-26 01:54:59.037457 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037461 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-03-26 01:54:59.037464 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037468 | orchestrator | 
"", 2026-03-26 01:54:59.037472 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-26 01:54:59.037476 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-26 01:54:59.037479 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037483 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-03-26 01:54:59.037487 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037491 | orchestrator | "", 2026-03-26 01:54:59.037496 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-26 01:54:59.037500 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-26 01:54:59.037504 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037507 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-03-26 01:54:59.037511 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037515 | orchestrator | "", 2026-03-26 01:54:59.037519 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-26 01:54:59.037522 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037526 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037530 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037533 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037537 | orchestrator | "", 2026-03-26 01:54:59.037541 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-26 01:54:59.037545 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-26 01:54:59.037548 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037553 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-26 01:54:59.037557 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037560 | orchestrator | "", 2026-03-26 01:54:59.037564 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-03-26 01:54:59.037568 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-26 01:54:59.037571 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037575 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-26 01:54:59.037579 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037583 | orchestrator | "", 2026-03-26 01:54:59.037586 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-26 01:54:59.037590 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-26 01:54:59.037594 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037597 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-03-26 01:54:59.037601 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037605 | orchestrator | "", 2026-03-26 01:54:59.037608 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-26 01:54:59.037612 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-26 01:54:59.037616 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037619 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-26 01:54:59.037623 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037627 | orchestrator | "", 2026-03-26 01:54:59.037630 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-26 01:54:59.037634 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037694 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037702 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037708 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037714 | orchestrator | "", 2026-03-26 01:54:59.037720 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-26 01:54:59.037726 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037732 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037737 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037741 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037745 | orchestrator | "", 2026-03-26 01:54:59.037749 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-26 01:54:59.037753 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037756 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037760 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037764 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037768 | orchestrator | "", 2026-03-26 01:54:59.037771 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-26 01:54:59.037775 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037779 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037783 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037795 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037799 | orchestrator | "", 2026-03-26 01:54:59.037803 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-26 01:54:59.037807 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037816 | orchestrator | " Enabled: true", 2026-03-26 01:54:59.037820 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-03-26 01:54:59.037824 | orchestrator | " Status: ✅ MATCH", 2026-03-26 01:54:59.037828 | orchestrator | "", 2026-03-26 01:54:59.037832 | orchestrator | "=== Summary ===", 2026-03-26 01:54:59.037835 | orchestrator | "Errors (version mismatches): 0", 2026-03-26 01:54:59.037839 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-03-26 01:54:59.037843 | orchestrator | "", 2026-03-26 01:54:59.037847 | orchestrator | "✅ All running containers match expected versions!" 2026-03-26 01:54:59.037851 | orchestrator | ] 2026-03-26 01:54:59.037854 | orchestrator | } 2026-03-26 01:54:59.037859 | orchestrator | 2026-03-26 01:54:59.037863 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-26 01:54:59.093319 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:54:59.093391 | orchestrator | 2026-03-26 01:54:59.093398 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:54:59.093405 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-26 01:54:59.093409 | orchestrator | 2026-03-26 01:54:59.215769 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 01:54:59.215842 | orchestrator | + deactivate 2026-03-26 01:54:59.215850 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-26 01:54:59.215865 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 01:54:59.215871 | orchestrator | + export PATH 2026-03-26 01:54:59.215877 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-26 01:54:59.215883 | orchestrator | + '[' -n '' ']' 2026-03-26 01:54:59.215888 | orchestrator | + hash -r 2026-03-26 01:54:59.215893 | orchestrator | + '[' -n '' ']' 2026-03-26 01:54:59.215899 | orchestrator | + unset VIRTUAL_ENV 2026-03-26 01:54:59.215904 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-26 01:54:59.215910 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-26 01:54:59.215915 | orchestrator | + unset -f deactivate 2026-03-26 01:54:59.215921 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-26 01:54:59.223629 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-26 01:54:59.223764 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-26 01:54:59.223817 | orchestrator | + local max_attempts=60 2026-03-26 01:54:59.223834 | orchestrator | + local name=ceph-ansible 2026-03-26 01:54:59.223850 | orchestrator | + local attempt_num=1 2026-03-26 01:54:59.224252 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 01:54:59.259318 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-26 01:54:59.259415 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-26 01:54:59.259431 | orchestrator | + local max_attempts=60 2026-03-26 01:54:59.259444 | orchestrator | + local name=kolla-ansible 2026-03-26 01:54:59.259455 | orchestrator | + local attempt_num=1 2026-03-26 01:54:59.260368 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-26 01:54:59.301212 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-26 01:54:59.301304 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-26 01:54:59.301317 | orchestrator | + local max_attempts=60 2026-03-26 01:54:59.301328 | orchestrator | + local name=osism-ansible 2026-03-26 01:54:59.301337 | orchestrator | + local attempt_num=1 2026-03-26 01:54:59.302314 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-26 01:54:59.352266 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-26 01:54:59.352384 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-26 01:54:59.352407 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-26 01:55:00.145568 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-03-26 01:55:00.345011 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-26 01:55:00.345077 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-26 01:55:00.345084 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-26 01:55:00.345089 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-26 01:55:00.345094 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-26 01:55:00.345104 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-26 01:55:00.345108 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-26 01:55:00.345112 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-26 01:55:00.345116 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-26 01:55:00.345120 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-26 01:55:00.345124 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-03-26 01:55:00.345127 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-26 01:55:00.345131 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-26 01:55:00.345147 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-26 01:55:00.345152 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-26 01:55:00.345156 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-26 01:55:00.350804 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-26 01:55:00.412127 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-26 01:55:00.412223 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-26 01:55:00.416265 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-26 01:55:12.852937 | orchestrator | 2026-03-26 01:55:12 | INFO  | Task b338eb90-9c5b-4feb-973b-e917f7311fb8 (resolvconf) was prepared for execution. 2026-03-26 01:55:12.853034 | orchestrator | 2026-03-26 01:55:12 | INFO  | It takes a moment until task b338eb90-9c5b-4feb-973b-e917f7311fb8 (resolvconf) has been started and output is visible here. 
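The trace above runs `semver 9.5.0 7.0.0` and tests the result with `[[ 1 -ge 0 ]]`, which suggests a comparator printing -1, 0, or 1. A minimal sketch of such a comparison built on GNU `sort -V` (the job's actual `semver` helper may behave differently):

```shell
# Compare two version strings; print -1, 0, or 1 as $1 is lower than,
# equal to, or greater than $2. Relies on GNU sort's version ordering.
semver_cmp() {
    if [ "$1" = "$2" ]; then echo 0; return; fi
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$lowest" = "$1" ]; then echo -1; else echo 1; fi
}
```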
2026-03-26 01:55:27.566943 | orchestrator | 2026-03-26 01:55:27.568054 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-26 01:55:27.568110 | orchestrator | 2026-03-26 01:55:27.568127 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:55:27.568143 | orchestrator | Thursday 26 March 2026 01:55:17 +0000 (0:00:00.157) 0:00:00.157 ******** 2026-03-26 01:55:27.568157 | orchestrator | ok: [testbed-manager] 2026-03-26 01:55:27.568171 | orchestrator | 2026-03-26 01:55:27.568186 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-26 01:55:27.568201 | orchestrator | Thursday 26 March 2026 01:55:21 +0000 (0:00:03.974) 0:00:04.131 ******** 2026-03-26 01:55:27.568217 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:55:27.568234 | orchestrator | 2026-03-26 01:55:27.568249 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-26 01:55:27.568263 | orchestrator | Thursday 26 March 2026 01:55:21 +0000 (0:00:00.071) 0:00:04.203 ******** 2026-03-26 01:55:27.568280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-26 01:55:27.568296 | orchestrator | 2026-03-26 01:55:27.568311 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-26 01:55:27.568325 | orchestrator | Thursday 26 March 2026 01:55:21 +0000 (0:00:00.091) 0:00:04.294 ******** 2026-03-26 01:55:27.568364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-26 01:55:27.568380 | orchestrator | 2026-03-26 01:55:27.568395 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-26 01:55:27.568410 | orchestrator | Thursday 26 March 2026 01:55:21 +0000 (0:00:00.081) 0:00:04.376 ******** 2026-03-26 01:55:27.568425 | orchestrator | ok: [testbed-manager] 2026-03-26 01:55:27.568441 | orchestrator | 2026-03-26 01:55:27.568457 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-26 01:55:27.568473 | orchestrator | Thursday 26 March 2026 01:55:22 +0000 (0:00:01.247) 0:00:05.623 ******** 2026-03-26 01:55:27.568488 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:55:27.568583 | orchestrator | 2026-03-26 01:55:27.568599 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-26 01:55:27.568615 | orchestrator | Thursday 26 March 2026 01:55:22 +0000 (0:00:00.066) 0:00:05.690 ******** 2026-03-26 01:55:27.568716 | orchestrator | ok: [testbed-manager] 2026-03-26 01:55:27.568737 | orchestrator | 2026-03-26 01:55:27.568830 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-26 01:55:27.568848 | orchestrator | Thursday 26 March 2026 01:55:23 +0000 (0:00:00.528) 0:00:06.218 ******** 2026-03-26 01:55:27.568863 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:55:27.568878 | orchestrator | 2026-03-26 01:55:27.568892 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-26 01:55:27.568908 | orchestrator | Thursday 26 March 2026 01:55:23 +0000 (0:00:00.091) 0:00:06.310 ******** 2026-03-26 01:55:27.568923 | orchestrator | changed: [testbed-manager] 2026-03-26 01:55:27.568938 | orchestrator | 2026-03-26 01:55:27.568953 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-26 01:55:27.568972 | orchestrator | Thursday 26 March 2026 01:55:23 +0000 (0:00:00.562) 0:00:06.872 ******** 2026-03-26 01:55:27.568988 | orchestrator | changed: 
[testbed-manager] 2026-03-26 01:55:27.569003 | orchestrator | 2026-03-26 01:55:27.569017 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-26 01:55:27.569033 | orchestrator | Thursday 26 March 2026 01:55:25 +0000 (0:00:01.131) 0:00:08.004 ******** 2026-03-26 01:55:27.569049 | orchestrator | ok: [testbed-manager] 2026-03-26 01:55:27.569066 | orchestrator | 2026-03-26 01:55:27.569082 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-26 01:55:27.569098 | orchestrator | Thursday 26 March 2026 01:55:26 +0000 (0:00:01.017) 0:00:09.021 ******** 2026-03-26 01:55:27.569116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-26 01:55:27.569131 | orchestrator | 2026-03-26 01:55:27.569145 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-26 01:55:27.569159 | orchestrator | Thursday 26 March 2026 01:55:26 +0000 (0:00:00.084) 0:00:09.106 ******** 2026-03-26 01:55:27.569173 | orchestrator | changed: [testbed-manager] 2026-03-26 01:55:27.569189 | orchestrator | 2026-03-26 01:55:27.569204 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:55:27.569221 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 01:55:27.569236 | orchestrator | 2026-03-26 01:55:27.569251 | orchestrator | 2026-03-26 01:55:27.569267 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 01:55:27.569416 | orchestrator | Thursday 26 March 2026 01:55:27 +0000 (0:00:01.180) 0:00:10.286 ******** 2026-03-26 01:55:27.569496 | orchestrator | =============================================================================== 2026-03-26 01:55:27.569514 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.97s 2026-03-26 01:55:27.569529 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.25s 2026-03-26 01:55:27.569543 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2026-03-26 01:55:27.569556 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.13s 2026-03-26 01:55:27.569571 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s 2026-03-26 01:55:27.569586 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.56s 2026-03-26 01:55:27.569631 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s 2026-03-26 01:55:27.569648 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-03-26 01:55:27.569737 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-03-26 01:55:27.569756 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-03-26 01:55:27.569772 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-03-26 01:55:27.569787 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-26 01:55:27.569819 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-03-26 01:55:27.882105 | orchestrator | + osism apply sshconfig 2026-03-26 01:55:39.981929 | orchestrator | 2026-03-26 01:55:39 | INFO  | Task c0f329a2-f521-4b7e-839d-4cec16d416a2 (sshconfig) was prepared for execution. 
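The `osism apply sshconfig` run that follows writes one config fragment per host into `.ssh/config.d` and then assembles them into a single ssh config. A simplified sketch of that fragment-then-assemble pattern (function names, fragment fields, and paths are illustrative, not the role's actual implementation):

```shell
# Write a per-host ssh config fragment into a config.d-style directory.
write_host_fragment() {
    dir=$1; host=$2; address=$3; user=$4
    mkdir -p "$dir"
    printf 'Host %s\n    HostName %s\n    User %s\n' \
        "$host" "$address" "$user" > "$dir/$host"
}

# Concatenate all fragments into one assembled ssh config file.
assemble_ssh_config() {
    dir=$1; out=$2
    cat "$dir"/* > "$out"
}
```

Keeping one fragment per host makes the per-host task idempotent; the final assemble step is the only place the combined file is produced.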
2026-03-26 01:55:39.982147 | orchestrator | 2026-03-26 01:55:39 | INFO  | It takes a moment until task c0f329a2-f521-4b7e-839d-4cec16d416a2 (sshconfig) has been started and output is visible here. 2026-03-26 01:55:52.463583 | orchestrator | 2026-03-26 01:55:52.463796 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-26 01:55:52.463828 | orchestrator | 2026-03-26 01:55:52.463872 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-26 01:55:52.463893 | orchestrator | Thursday 26 March 2026 01:55:44 +0000 (0:00:00.169) 0:00:00.169 ******** 2026-03-26 01:55:52.463911 | orchestrator | ok: [testbed-manager] 2026-03-26 01:55:52.463929 | orchestrator | 2026-03-26 01:55:52.463945 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-26 01:55:52.463962 | orchestrator | Thursday 26 March 2026 01:55:44 +0000 (0:00:00.551) 0:00:00.721 ******** 2026-03-26 01:55:52.463978 | orchestrator | changed: [testbed-manager] 2026-03-26 01:55:52.463996 | orchestrator | 2026-03-26 01:55:52.464013 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-26 01:55:52.464029 | orchestrator | Thursday 26 March 2026 01:55:45 +0000 (0:00:00.563) 0:00:01.284 ******** 2026-03-26 01:55:52.464046 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-03-26 01:55:52.464063 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-26 01:55:52.464080 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-26 01:55:52.464098 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-26 01:55:52.464115 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-26 01:55:52.464131 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-26 01:55:52.464149 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-03-26 01:55:52.464165 | orchestrator | 2026-03-26 01:55:52.464181 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-26 01:55:52.464197 | orchestrator | Thursday 26 March 2026 01:55:51 +0000 (0:00:06.025) 0:00:07.310 ******** 2026-03-26 01:55:52.464215 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:55:52.464233 | orchestrator | 2026-03-26 01:55:52.464250 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-26 01:55:52.464267 | orchestrator | Thursday 26 March 2026 01:55:51 +0000 (0:00:00.088) 0:00:07.398 ******** 2026-03-26 01:55:52.464284 | orchestrator | changed: [testbed-manager] 2026-03-26 01:55:52.464300 | orchestrator | 2026-03-26 01:55:52.464317 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:55:52.464336 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 01:55:52.464354 | orchestrator | 2026-03-26 01:55:52.464370 | orchestrator | 2026-03-26 01:55:52.464387 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 01:55:52.464404 | orchestrator | Thursday 26 March 2026 01:55:52 +0000 (0:00:00.631) 0:00:08.029 ******** 2026-03-26 01:55:52.464421 | orchestrator | =============================================================================== 2026-03-26 01:55:52.464439 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.03s 2026-03-26 01:55:52.464456 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2026-03-26 01:55:52.464472 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.56s 2026-03-26 01:55:52.464490 | orchestrator | osism.commons.sshconfig : Get home directory of operator user 
----------- 0.55s 2026-03-26 01:55:52.464507 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-03-26 01:55:52.791032 | orchestrator | + osism apply known-hosts 2026-03-26 01:56:04.858923 | orchestrator | 2026-03-26 01:56:04 | INFO  | Task 1fa85f69-74d9-41fc-88eb-077113f41686 (known-hosts) was prepared for execution. 2026-03-26 01:56:04.859076 | orchestrator | 2026-03-26 01:56:04 | INFO  | It takes a moment until task 1fa85f69-74d9-41fc-88eb-077113f41686 (known-hosts) has been started and output is visible here. 2026-03-26 01:56:22.988728 | orchestrator | 2026-03-26 01:56:22.988824 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-26 01:56:22.988835 | orchestrator | 2026-03-26 01:56:22.988842 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-26 01:56:22.988850 | orchestrator | Thursday 26 March 2026 01:56:09 +0000 (0:00:00.179) 0:00:00.179 ******** 2026-03-26 01:56:22.988857 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-26 01:56:22.988864 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-26 01:56:22.988870 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-26 01:56:22.988877 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-26 01:56:22.988883 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-26 01:56:22.988889 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-26 01:56:22.988895 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-26 01:56:22.988901 | orchestrator | 2026-03-26 01:56:22.988907 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-26 01:56:22.988915 | orchestrator | Thursday 26 March 2026 01:56:15 +0000 (0:00:06.471) 0:00:06.651 ******** 2026-03-26 01:56:22.988922 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-26 01:56:22.988930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-26 01:56:22.988936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-26 01:56:22.988942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-26 01:56:22.988948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-26 01:56:22.988962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-26 01:56:22.988969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-26 01:56:22.988975 | orchestrator | 2026-03-26 01:56:22.988981 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:22.988987 | orchestrator | Thursday 26 March 2026 01:56:15 +0000 (0:00:00.175) 0:00:06.827 ******** 2026-03-26 01:56:22.988994 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIJ02fclj3yBlaRFCMRlBqzxa6JpoNn0+0cesF8DzED/Y) 2026-03-26 01:56:22.989006 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjl441PvBESBlyBRx1uu1H4fE7tRu8wdi33Q8JFqeLTaQZjzDEdIz6uCS4P7vly4E/qLLf8bB/8UcNRyQNcICQGP4n1FNR/EKvBi3/Gyc8AFEHmafJBMQ+fPJKZT16DdkyJaug5SebtzLWoBu55VpcJBQEuazelbRRNG8Xz6iBrkfu/Kec/g0Rno6KOcdUjXqp9eXijo2MLaMUS8xKb9xvjzA8lG94Nfv5kLJxesAhlOo3+QdD7zaeMZ7NUtCfAV0/o8Ruvk/OCfm+9xqBh34HvSOpZATYXPchqL5K63v6jn0PFjt3YFZ0gIUmA1WSFQpb6laHtVAv7J1OW2xdYo6gotU1Cr37/4ig+HHn9P+oQgsmR48E/LQGDOE8e2lb9vbBc00XUT6BPP5pLf9JyRJiE8oeTuYbPu7EAcxRe0Ub7IiJPGMHrzRYtxLMNbIcIC3Albt9+yYod/l1CB/xtv2cBu/ChuADP3Zn4908Xfv7x+enwcnvCi1DiX8TxkEolms=) 2026-03-26 01:56:22.989033 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP5uNsSXygIZlJfKHbiEnkj18urKp5QMG+27OXFSyVeZCLFzqulV6Rzhv/iPWmiLWxR1gs0vX6AnWLfHyOAmsI4=) 2026-03-26 01:56:22.989040 | orchestrator | 2026-03-26 01:56:22.989046 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:22.989053 | orchestrator | Thursday 26 March 2026 01:56:17 +0000 (0:00:01.243) 0:00:08.071 ******** 2026-03-26 01:56:22.989074 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7MP4ethAPLioYMXHsztWTZ5ixht+3Sq7KMlb6HBjt4WWGu5avxBzCqnNkYFhF7TM+NZp4FPbph0s1vt1MbCjwAX101M9p/xXyIg6HqiULDQGPnSM9m+LRiEsrASOQGF6aFihMRdcJRhnREJZq9qGdmItuWj9KhGkPm8uRtSFbq+buvlAumNVyRGX/2g9hRT6OPPaBs/0vnzqWm0PY24uelp14behZeTqbe67Xef6Ejb2CPSpASVYAK7cBu4QLBtZt+ComHffq2pp5kC/BZzYHIkNSsmHez8A+uBRCSgkLTqIoCB1KlEDd4YUFkUqRwLSO3P+YeXwrdsuLV64aUMYp+mx/hn1UhZf52iPN5/4dhu/u1pNdyfBgkAI8YresJOdSH6Ztn91oOS0NfNvirYGQGgZQ8F6GAHw/Jqj4Tsu0/0pBD6p34E7ve6hpBikPhSLPtsJmfT5TuU1FdhBMtq6IGmYZfCa24W3TuSRsZnpNeoagjyEJOIujgECw1Gl5Vfc=) 2026-03-26 01:56:22.989086 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJBAfKNBqc6hZaPCToSyiH071v/szTHinZTYmvWtQiYY2BD9OmGZbDU8lB1KJXImA4sH06NV8BnsqYFdWToJUQ=) 2026-03-26 01:56:22.989096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHe8Qmub5XmBttmFEUBkU1KnPXMx15s5oXemfYKWwqe/) 2026-03-26 01:56:22.989106 | orchestrator | 2026-03-26 01:56:22.989116 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:22.989126 | orchestrator | Thursday 26 March 2026 01:56:18 +0000 (0:00:01.203) 0:00:09.274 ******** 2026-03-26 01:56:22.989136 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+4xLf8E9LJe8aia/EpM0v0rX+8sflF8vLZvkHgEXWEhHuSqtxzIL8ACbdmnHD33O2W0UBsUAyKb19kCZTvLXTqmZ2+J24oE3ZFXc8YL5hTd5Yl4TRhBUjg9m31EizACs/0jAMW2ZKSmiAC0t9dhTj2ZMMJ9YMi/kGEhZ1c3/HNB0H3V8krwQERPfDvu8cEHaZVKLA2E86hUr7O4LpXVaMjYarBwX2WTY5YK2l3fkb7rJAigD7P858LeetHWGn6w4gw6illPyvLV50RMhF4BD/KdhvdPV2Zilyh/H23zhuvOy1he82obLlwFNK7mibsaxgEJrsx0VpcFsA6bygql94UvQR3dKTb4Rc34++oxIQgdFsdrIKHadlhGcaEekjU6ZI0BKd0qE511yymd1N3ooYoVHbf8mhhr1oKWq/os46wBLhEyuyNYD3xS2L/YX2jFpKgiU4PIYyJZy+WDIkiuZNjucyKTdIci0+wIJYghYdt9ZWtDIyBEJn6QjDxRRYWaE=) 2026-03-26 01:56:22.989146 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLOteKbqWmsecSgYs+xqbb6nNATEUQK4q13kbcpg1b1i+s0qETmOp+d8ADubOahpy9BafeOlLLvip9rzRVjYfSs=) 2026-03-26 01:56:22.989156 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICb08xQfqtFt6WLy7uNMC8Hgb7gIH7sGbdc6Gp7j5cy2) 2026-03-26 01:56:22.989166 | orchestrator | 2026-03-26 01:56:22.989176 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:22.989186 | orchestrator | Thursday 26 March 2026 01:56:19 +0000 (0:00:01.208) 
0:00:10.482 ******** 2026-03-26 01:56:22.989197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM4uc4YzFGwjqMlk336TEFW9Sr5NydBFeDixkdguq0xoKZ7Av9HzodJRVzGN627myZhV4E815uWI3x9reyYZn0RcrcDzleefzPfPoWxct0zKx0MfT2zwMJ22JNwOTn6REmWHbevXA5ITeutZAPxJdPrwoS8s1a1Buj3Ky0tfIaCqCRAkIV5P1pJRFSlt7gZL0ImYnlZoIDfKZ/cBT9BzsRT8CWHHY37KfeVwJm37huH9rMBh/eHolG5yLNXGvKrTq0QKC1mMY2qK0APt4GmFl8KPTnNLajycV5GheWblGibpQGFM93+2Z/WqnRJf82EIxu8yCqPo3/1FhS5lFWtQRK9+I/5G7KpU3S/WCcAmUn+w2yg1uU3CAEV+RycYxvp2lqJTbwlZx/pTBotJ9R7cUBQ5aW0Nsvp8aytFY0HRc4Xa8z0Pg6UlJTSdG0IENfARZSKYkJ1g+2JRpe/lOoXOqpWEA4KW+XdPodGxf0D2YqCd/rGxlBauwYYWbZ3Imo7ic=) 2026-03-26 01:56:22.989251 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAubqEkG0vhsyENkRhNUdy8+oCOQJ2VjqsD1HLNi3GC9S06/AsvC5szSrOM8xPq6YUCb0ZAm8wLM6tpF+FpQCkI=) 2026-03-26 01:56:22.989262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIILAw9GUy2vpfwElSJ1fGmtyFuqmYW2r9XbGM4GRo7eY) 2026-03-26 01:56:22.989273 | orchestrator | 2026-03-26 01:56:22.989283 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:22.989293 | orchestrator | Thursday 26 March 2026 01:56:20 +0000 (0:00:01.216) 0:00:11.699 ******** 2026-03-26 01:56:22.989369 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDJ5PAiQENk+ItZkO7lCzckgIds0Q2WgCbLVLzPfl9J/MU02VMGGoMDDgddubcewkixXkbNvIDK1stUBpmzi0L0j6dHcUMdFvRgqm8GCoMsXNrfHW/ct+73eNQxULwT9bQllXOZr2nB7MltedCplvwjiFJBdmFrGAIp+GrXL0PMApIE/dd1f1qPoBloRBPpdJX1CjF1OGaQsZ4qpz7WpJrlw7thtAHdcylkEbcstauBnNgTubYjVaIo2uIJXiCWNIL0fPhNIHAMKddURALWDzhs+Xh/MW65iFiI8YfPERzwYogm9ZGXr3rX08YWezcrnK5zImuNAFUMNYrmHbLAjzsIRCewlKYooBB2dtd3+XDSqNrvX7+tVW6K8NlU917X/WU52YVtcXFisGbVrw/elyi1iLd7EwmJ9QCUMobcLT0ImLsPJAR5WpjsxApgqPjXLiRWoBFIT4ZJh3VGUe5BPDf8dT/Ij1IENkwR4wNsSm+8yKbokryeCpNJ/83tRYElNd8=) 2026-03-26 01:56:22.989382 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKbUTEgC6bw2kPbQ69BQKKbddt6FzzQXjl7gPSrh85KeKra/4AOp2yhnEkwPsOgQiMMH5NRrWQXsymB0y+9b1Q8=) 2026-03-26 01:56:22.989392 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGuDGp4QP0xBuwhTX5o6OTk2Lr58UqgVCuxQtBHOizHi) 2026-03-26 01:56:22.989402 | orchestrator | 2026-03-26 01:56:22.989412 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:22.989422 | orchestrator | Thursday 26 March 2026 01:56:21 +0000 (0:00:01.175) 0:00:12.875 ******** 2026-03-26 01:56:22.989441 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNUp+hESk7uzg35hbkvvf+NUU3AkgbCjVB2Zkr8YJaR+BFh9GPPoZZsfxubkFFdXWW3nEN+8mFY1PIXNSyE0jYU=) 2026-03-26 01:56:34.808488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtvM026ThskiZdDILB9UjPU47PsDHtiHncBI7eSm7qoZ/vFmGYxPZa4NYpRQd13zEH+bhVZEQoiEJjSXZTdefjqImCiIVEtQYLak7OohLyh+btJqEp4MZX1p+Ef87NJt6iZXTmRO9hJa7iuKN5UU6QTJuvgvy/KEw7KR/yqIwc90ds6MtpKtAKEnqodONOccyVKbdgfAngkmZ2uye4IM8jrwODbjTbvmztDxJ3Sp24S7z173HSnLssG6zn5hxZxlMJQPsshb1LZB7JEF6lISzJIVNgyjszZc+PJPIXOQ0eOfc5gVES9pBSljtremJUbyNhJTxEYrER41dO5QupDhOo2kWeNPdHCnP2TzghsdM8MYFFRQrxKuPY9/o9ba8oW0xMHcVSQiUXpxZ0envlNeCzmYATs6z4y/XPHjA1K+boWRjzlBpTOsPyxFYR8NWg16MV5V4YwxIGBkMwr86umV4mJ+4qP03CktNELRx8hMDTGQ1vt5H3sjQnWKAb5bwbJnk=) 2026-03-26 01:56:34.808624 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/uIeNDlNnWqqbf4nQ1iFXm8xqmTno9JV+7U0mLmoBx) 2026-03-26 01:56:34.808645 | orchestrator | 2026-03-26 01:56:34.808657 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:34.808669 | orchestrator | Thursday 26 March 2026 01:56:22 +0000 (0:00:01.136) 0:00:14.011 ******** 2026-03-26 01:56:34.808680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgKIZ+Izl5D9pHv6Kxhr5AyZEEKn9/BK9MNhQb/QhaO78ATvWpiJlyn30Si4IfRm0v7dCgB0sW8NB5CvAGq/ZGVQE3hr4tNBZrraVdJW4ItePEAsUdTBKwaGp9SU7bwlkZDWNr4EKmSRIefON4mC+Ybn4LJccP5VfAIqMv7w/fcREx9xmgnP7OprQLP3T35T1ObQBXrDu4nOSoxOUa3eeTBzb0M2JXK31R8MAkfBpl3xt/3e/97jB5oOcK9t9kbC3QwZmIYAcx+XCSmxyQ7kcR5PTtZoBVwzPXqPMU0tCc0UXDEJRnsTrXIJ0wArclmDuuIyLundoxRn57nEmHByjdHzJ5u/RqZYNJHs0LWyPDC7FOQ2ek+xnsnzjizES2z96sVFk385BUNTp+/Iv74/8jOP7Glx2+Q3TQc5BCM45SISAWv/C65y24IJLwG5hfzk0+g1I/moUPPDThzsC9weUzFs4C0dDIYlckw90A0PCUIFSGzaLQtti2t3GTfb6oISU=) 2026-03-26 01:56:34.808691 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2OXpbGe9PH2FJGGvIa5N+853ic5OP4B+gsXUDRcJHyzkcI+mMK2k9rDDpbgobH0QeLp+WUIjUIHSmvkdzCtC4=) 2026-03-26 01:56:34.808807 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK2b9SZZ/SJ6wJY/vZIQL6KvWIfJRQxK2nxhYfFkytaC) 2026-03-26 01:56:34.808828 | orchestrator | 2026-03-26 01:56:34.808843 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-26 01:56:34.808858 | orchestrator | Thursday 26 March 2026 01:56:24 +0000 (0:00:01.211) 0:00:15.223 ******** 2026-03-26 01:56:34.808875 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-26 01:56:34.808891 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-26 01:56:34.808906 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-26 01:56:34.808920 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-26 01:56:34.808935 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-26 01:56:34.808951 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-26 01:56:34.808968 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-26 01:56:34.808985 | orchestrator | 2026-03-26 01:56:34.809001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-26 01:56:34.809018 | orchestrator | Thursday 26 March 2026 01:56:29 +0000 (0:00:05.758) 0:00:20.981 ******** 2026-03-26 01:56:34.809029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-26 01:56:34.809041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-26 01:56:34.809054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-03-26 01:56:34.809071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-26 01:56:34.809086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-26 01:56:34.809102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-26 01:56:34.809117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-26 01:56:34.809132 | orchestrator | 2026-03-26 01:56:34.809171 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:34.809187 | orchestrator | Thursday 26 March 2026 01:56:30 +0000 (0:00:00.184) 0:00:21.166 ******** 2026-03-26 01:56:34.809205 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjl441PvBESBlyBRx1uu1H4fE7tRu8wdi33Q8JFqeLTaQZjzDEdIz6uCS4P7vly4E/qLLf8bB/8UcNRyQNcICQGP4n1FNR/EKvBi3/Gyc8AFEHmafJBMQ+fPJKZT16DdkyJaug5SebtzLWoBu55VpcJBQEuazelbRRNG8Xz6iBrkfu/Kec/g0Rno6KOcdUjXqp9eXijo2MLaMUS8xKb9xvjzA8lG94Nfv5kLJxesAhlOo3+QdD7zaeMZ7NUtCfAV0/o8Ruvk/OCfm+9xqBh34HvSOpZATYXPchqL5K63v6jn0PFjt3YFZ0gIUmA1WSFQpb6laHtVAv7J1OW2xdYo6gotU1Cr37/4ig+HHn9P+oQgsmR48E/LQGDOE8e2lb9vbBc00XUT6BPP5pLf9JyRJiE8oeTuYbPu7EAcxRe0Ub7IiJPGMHrzRYtxLMNbIcIC3Albt9+yYod/l1CB/xtv2cBu/ChuADP3Zn4908Xfv7x+enwcnvCi1DiX8TxkEolms=) 2026-03-26 01:56:34.809234 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP5uNsSXygIZlJfKHbiEnkj18urKp5QMG+27OXFSyVeZCLFzqulV6Rzhv/iPWmiLWxR1gs0vX6AnWLfHyOAmsI4=) 2026-03-26 01:56:34.809267 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ02fclj3yBlaRFCMRlBqzxa6JpoNn0+0cesF8DzED/Y) 2026-03-26 01:56:34.809283 | orchestrator | 2026-03-26 01:56:34.809301 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:34.809318 | orchestrator | Thursday 26 March 2026 01:56:31 +0000 (0:00:01.113) 0:00:22.279 ******** 2026-03-26 01:56:34.809335 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7MP4ethAPLioYMXHsztWTZ5ixht+3Sq7KMlb6HBjt4WWGu5avxBzCqnNkYFhF7TM+NZp4FPbph0s1vt1MbCjwAX101M9p/xXyIg6HqiULDQGPnSM9m+LRiEsrASOQGF6aFihMRdcJRhnREJZq9qGdmItuWj9KhGkPm8uRtSFbq+buvlAumNVyRGX/2g9hRT6OPPaBs/0vnzqWm0PY24uelp14behZeTqbe67Xef6Ejb2CPSpASVYAK7cBu4QLBtZt+ComHffq2pp5kC/BZzYHIkNSsmHez8A+uBRCSgkLTqIoCB1KlEDd4YUFkUqRwLSO3P+YeXwrdsuLV64aUMYp+mx/hn1UhZf52iPN5/4dhu/u1pNdyfBgkAI8YresJOdSH6Ztn91oOS0NfNvirYGQGgZQ8F6GAHw/Jqj4Tsu0/0pBD6p34E7ve6hpBikPhSLPtsJmfT5TuU1FdhBMtq6IGmYZfCa24W3TuSRsZnpNeoagjyEJOIujgECw1Gl5Vfc=) 2026-03-26 01:56:34.809352 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJBAfKNBqc6hZaPCToSyiH071v/szTHinZTYmvWtQiYY2BD9OmGZbDU8lB1KJXImA4sH06NV8BnsqYFdWToJUQ=) 2026-03-26 01:56:34.809368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHe8Qmub5XmBttmFEUBkU1KnPXMx15s5oXemfYKWwqe/) 2026-03-26 01:56:34.809383 | orchestrator | 2026-03-26 01:56:34.809399 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:34.809415 | orchestrator | Thursday 26 March 2026 01:56:32 +0000 (0:00:01.189) 0:00:23.469 ******** 2026-03-26 01:56:34.809431 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+4xLf8E9LJe8aia/EpM0v0rX+8sflF8vLZvkHgEXWEhHuSqtxzIL8ACbdmnHD33O2W0UBsUAyKb19kCZTvLXTqmZ2+J24oE3ZFXc8YL5hTd5Yl4TRhBUjg9m31EizACs/0jAMW2ZKSmiAC0t9dhTj2ZMMJ9YMi/kGEhZ1c3/HNB0H3V8krwQERPfDvu8cEHaZVKLA2E86hUr7O4LpXVaMjYarBwX2WTY5YK2l3fkb7rJAigD7P858LeetHWGn6w4gw6illPyvLV50RMhF4BD/KdhvdPV2Zilyh/H23zhuvOy1he82obLlwFNK7mibsaxgEJrsx0VpcFsA6bygql94UvQR3dKTb4Rc34++oxIQgdFsdrIKHadlhGcaEekjU6ZI0BKd0qE511yymd1N3ooYoVHbf8mhhr1oKWq/os46wBLhEyuyNYD3xS2L/YX2jFpKgiU4PIYyJZy+WDIkiuZNjucyKTdIci0+wIJYghYdt9ZWtDIyBEJn6QjDxRRYWaE=) 2026-03-26 01:56:34.809449 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLOteKbqWmsecSgYs+xqbb6nNATEUQK4q13kbcpg1b1i+s0qETmOp+d8ADubOahpy9BafeOlLLvip9rzRVjYfSs=) 2026-03-26 01:56:34.809466 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICb08xQfqtFt6WLy7uNMC8Hgb7gIH7sGbdc6Gp7j5cy2) 2026-03-26 01:56:34.809483 | orchestrator | 2026-03-26 01:56:34.809498 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:34.809515 | orchestrator | Thursday 26 March 2026 01:56:33 +0000 (0:00:01.183) 0:00:24.653 ******** 2026-03-26 01:56:34.809525 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAubqEkG0vhsyENkRhNUdy8+oCOQJ2VjqsD1HLNi3GC9S06/AsvC5szSrOM8xPq6YUCb0ZAm8wLM6tpF+FpQCkI=) 2026-03-26 01:56:34.809535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIILAw9GUy2vpfwElSJ1fGmtyFuqmYW2r9XbGM4GRo7eY) 2026-03-26 01:56:34.809564 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDM4uc4YzFGwjqMlk336TEFW9Sr5NydBFeDixkdguq0xoKZ7Av9HzodJRVzGN627myZhV4E815uWI3x9reyYZn0RcrcDzleefzPfPoWxct0zKx0MfT2zwMJ22JNwOTn6REmWHbevXA5ITeutZAPxJdPrwoS8s1a1Buj3Ky0tfIaCqCRAkIV5P1pJRFSlt7gZL0ImYnlZoIDfKZ/cBT9BzsRT8CWHHY37KfeVwJm37huH9rMBh/eHolG5yLNXGvKrTq0QKC1mMY2qK0APt4GmFl8KPTnNLajycV5GheWblGibpQGFM93+2Z/WqnRJf82EIxu8yCqPo3/1FhS5lFWtQRK9+I/5G7KpU3S/WCcAmUn+w2yg1uU3CAEV+RycYxvp2lqJTbwlZx/pTBotJ9R7cUBQ5aW0Nsvp8aytFY0HRc4Xa8z0Pg6UlJTSdG0IENfARZSKYkJ1g+2JRpe/lOoXOqpWEA4KW+XdPodGxf0D2YqCd/rGxlBauwYYWbZ3Imo7ic=) 2026-03-26 01:56:39.803924 | orchestrator | 2026-03-26 01:56:39.804032 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:39.804048 | orchestrator | Thursday 26 March 2026 01:56:34 +0000 (0:00:01.177) 0:00:25.830 ******** 2026-03-26 01:56:39.804059 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKbUTEgC6bw2kPbQ69BQKKbddt6FzzQXjl7gPSrh85KeKra/4AOp2yhnEkwPsOgQiMMH5NRrWQXsymB0y+9b1Q8=) 2026-03-26 01:56:39.804072 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGuDGp4QP0xBuwhTX5o6OTk2Lr58UqgVCuxQtBHOizHi) 2026-03-26 01:56:39.804085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJ5PAiQENk+ItZkO7lCzckgIds0Q2WgCbLVLzPfl9J/MU02VMGGoMDDgddubcewkixXkbNvIDK1stUBpmzi0L0j6dHcUMdFvRgqm8GCoMsXNrfHW/ct+73eNQxULwT9bQllXOZr2nB7MltedCplvwjiFJBdmFrGAIp+GrXL0PMApIE/dd1f1qPoBloRBPpdJX1CjF1OGaQsZ4qpz7WpJrlw7thtAHdcylkEbcstauBnNgTubYjVaIo2uIJXiCWNIL0fPhNIHAMKddURALWDzhs+Xh/MW65iFiI8YfPERzwYogm9ZGXr3rX08YWezcrnK5zImuNAFUMNYrmHbLAjzsIRCewlKYooBB2dtd3+XDSqNrvX7+tVW6K8NlU917X/WU52YVtcXFisGbVrw/elyi1iLd7EwmJ9QCUMobcLT0ImLsPJAR5WpjsxApgqPjXLiRWoBFIT4ZJh3VGUe5BPDf8dT/Ij1IENkwR4wNsSm+8yKbokryeCpNJ/83tRYElNd8=) 2026-03-26 01:56:39.804098 | orchestrator | 2026-03-26 01:56:39.804108 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:39.804118 | orchestrator | Thursday 26 March 2026 01:56:35 +0000 (0:00:01.199) 0:00:27.030 ******** 2026-03-26 01:56:39.804128 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtvM026ThskiZdDILB9UjPU47PsDHtiHncBI7eSm7qoZ/vFmGYxPZa4NYpRQd13zEH+bhVZEQoiEJjSXZTdefjqImCiIVEtQYLak7OohLyh+btJqEp4MZX1p+Ef87NJt6iZXTmRO9hJa7iuKN5UU6QTJuvgvy/KEw7KR/yqIwc90ds6MtpKtAKEnqodONOccyVKbdgfAngkmZ2uye4IM8jrwODbjTbvmztDxJ3Sp24S7z173HSnLssG6zn5hxZxlMJQPsshb1LZB7JEF6lISzJIVNgyjszZc+PJPIXOQ0eOfc5gVES9pBSljtremJUbyNhJTxEYrER41dO5QupDhOo2kWeNPdHCnP2TzghsdM8MYFFRQrxKuPY9/o9ba8oW0xMHcVSQiUXpxZ0envlNeCzmYATs6z4y/XPHjA1K+boWRjzlBpTOsPyxFYR8NWg16MV5V4YwxIGBkMwr86umV4mJ+4qP03CktNELRx8hMDTGQ1vt5H3sjQnWKAb5bwbJnk=) 2026-03-26 01:56:39.804139 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNUp+hESk7uzg35hbkvvf+NUU3AkgbCjVB2Zkr8YJaR+BFh9GPPoZZsfxubkFFdXWW3nEN+8mFY1PIXNSyE0jYU=) 2026-03-26 01:56:39.804148 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC/uIeNDlNnWqqbf4nQ1iFXm8xqmTno9JV+7U0mLmoBx) 2026-03-26 01:56:39.804158 | orchestrator | 2026-03-26 01:56:39.804168 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-26 01:56:39.804177 | orchestrator | Thursday 26 March 2026 01:56:37 +0000 (0:00:01.197) 0:00:28.228 ******** 2026-03-26 01:56:39.804207 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCgKIZ+Izl5D9pHv6Kxhr5AyZEEKn9/BK9MNhQb/QhaO78ATvWpiJlyn30Si4IfRm0v7dCgB0sW8NB5CvAGq/ZGVQE3hr4tNBZrraVdJW4ItePEAsUdTBKwaGp9SU7bwlkZDWNr4EKmSRIefON4mC+Ybn4LJccP5VfAIqMv7w/fcREx9xmgnP7OprQLP3T35T1ObQBXrDu4nOSoxOUa3eeTBzb0M2JXK31R8MAkfBpl3xt/3e/97jB5oOcK9t9kbC3QwZmIYAcx+XCSmxyQ7kcR5PTtZoBVwzPXqPMU0tCc0UXDEJRnsTrXIJ0wArclmDuuIyLundoxRn57nEmHByjdHzJ5u/RqZYNJHs0LWyPDC7FOQ2ek+xnsnzjizES2z96sVFk385BUNTp+/Iv74/8jOP7Glx2+Q3TQc5BCM45SISAWv/C65y24IJLwG5hfzk0+g1I/moUPPDThzsC9weUzFs4C0dDIYlckw90A0PCUIFSGzaLQtti2t3GTfb6oISU=) 2026-03-26 01:56:39.804218 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK2OXpbGe9PH2FJGGvIa5N+853ic5OP4B+gsXUDRcJHyzkcI+mMK2k9rDDpbgobH0QeLp+WUIjUIHSmvkdzCtC4=) 2026-03-26 01:56:39.804228 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK2b9SZZ/SJ6wJY/vZIQL6KvWIfJRQxK2nxhYfFkytaC) 2026-03-26 01:56:39.804238 | orchestrator | 2026-03-26 01:56:39.804247 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-26 01:56:39.804280 | orchestrator | Thursday 26 March 2026 01:56:38 +0000 (0:00:01.261) 0:00:29.489 ******** 2026-03-26 01:56:39.804307 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-26 01:56:39.804325 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-26 01:56:39.804340 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-26 01:56:39.804356 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-26 01:56:39.804371 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-26 01:56:39.804385 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-26 01:56:39.804400 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-26 01:56:39.804416 | orchestrator | 
skipping: [testbed-manager] 2026-03-26 01:56:39.804431 | orchestrator | 2026-03-26 01:56:39.804470 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-26 01:56:39.804490 | orchestrator | Thursday 26 March 2026 01:56:38 +0000 (0:00:00.166) 0:00:29.656 ******** 2026-03-26 01:56:39.804506 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:56:39.804523 | orchestrator | 2026-03-26 01:56:39.804538 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-26 01:56:39.804551 | orchestrator | Thursday 26 March 2026 01:56:38 +0000 (0:00:00.071) 0:00:29.728 ******** 2026-03-26 01:56:39.804570 | orchestrator | skipping: [testbed-manager] 2026-03-26 01:56:39.804581 | orchestrator | 2026-03-26 01:56:39.804598 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-26 01:56:39.804615 | orchestrator | Thursday 26 March 2026 01:56:38 +0000 (0:00:00.066) 0:00:29.794 ******** 2026-03-26 01:56:39.804632 | orchestrator | changed: [testbed-manager] 2026-03-26 01:56:39.804648 | orchestrator | 2026-03-26 01:56:39.804665 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:56:39.804682 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 01:56:39.804701 | orchestrator | 2026-03-26 01:56:39.804746 | orchestrator | 2026-03-26 01:56:39.804764 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 01:56:39.804782 | orchestrator | Thursday 26 March 2026 01:56:39 +0000 (0:00:00.807) 0:00:30.602 ******** 2026-03-26 01:56:39.804800 | orchestrator | =============================================================================== 2026-03-26 01:56:39.804816 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.47s 2026-03-26 
01:56:39.804832 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.76s 2026-03-26 01:56:39.804842 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.26s 2026-03-26 01:56:39.804852 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-03-26 01:56:39.804862 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-03-26 01:56:39.804871 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-03-26 01:56:39.804881 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-03-26 01:56:39.804890 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-03-26 01:56:39.804899 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-03-26 01:56:39.804909 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-03-26 01:56:39.804919 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-03-26 01:56:39.804928 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-26 01:56:39.804937 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-26 01:56:39.804947 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-03-26 01:56:39.804967 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-03-26 01:56:39.804977 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-26 01:56:39.804986 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.81s 2026-03-26 
01:56:39.804996 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-03-26 01:56:39.805006 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-03-26 01:56:39.805016 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-03-26 01:56:40.172588 | orchestrator | + osism apply squid 2026-03-26 01:56:52.319164 | orchestrator | 2026-03-26 01:56:52 | INFO  | Task 22181c16-c6bd-4755-9a52-705156a3718e (squid) was prepared for execution. 2026-03-26 01:56:52.319273 | orchestrator | 2026-03-26 01:56:52 | INFO  | It takes a moment until task 22181c16-c6bd-4755-9a52-705156a3718e (squid) has been started and output is visible here. 2026-03-26 01:58:52.219891 | orchestrator | 2026-03-26 01:58:52.219993 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-26 01:58:52.220009 | orchestrator | 2026-03-26 01:58:52.220022 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-26 01:58:52.220033 | orchestrator | Thursday 26 March 2026 01:56:56 +0000 (0:00:00.184) 0:00:00.184 ******** 2026-03-26 01:58:52.220044 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-26 01:58:52.220056 | orchestrator | 2026-03-26 01:58:52.220067 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-26 01:58:52.220078 | orchestrator | Thursday 26 March 2026 01:56:56 +0000 (0:00:00.108) 0:00:00.293 ******** 2026-03-26 01:58:52.220089 | orchestrator | ok: [testbed-manager] 2026-03-26 01:58:52.220118 | orchestrator | 2026-03-26 01:58:52.220139 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-26 
01:58:52.220151 | orchestrator | Thursday 26 March 2026 01:56:58 +0000 (0:00:01.915) 0:00:02.208 ******** 2026-03-26 01:58:52.220163 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-26 01:58:52.220174 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-26 01:58:52.220184 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-26 01:58:52.220195 | orchestrator | 2026-03-26 01:58:52.220206 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-26 01:58:52.220217 | orchestrator | Thursday 26 March 2026 01:57:00 +0000 (0:00:01.319) 0:00:03.528 ******** 2026-03-26 01:58:52.220228 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-26 01:58:52.220239 | orchestrator | 2026-03-26 01:58:52.220250 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-26 01:58:52.220260 | orchestrator | Thursday 26 March 2026 01:57:01 +0000 (0:00:01.148) 0:00:04.676 ******** 2026-03-26 01:58:52.220271 | orchestrator | ok: [testbed-manager] 2026-03-26 01:58:52.220282 | orchestrator | 2026-03-26 01:58:52.220293 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-26 01:58:52.220304 | orchestrator | Thursday 26 March 2026 01:57:01 +0000 (0:00:00.386) 0:00:05.062 ******** 2026-03-26 01:58:52.220315 | orchestrator | changed: [testbed-manager] 2026-03-26 01:58:52.220327 | orchestrator | 2026-03-26 01:58:52.220338 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-26 01:58:52.220349 | orchestrator | Thursday 26 March 2026 01:57:02 +0000 (0:00:00.961) 0:00:06.023 ******** 2026-03-26 01:58:52.220360 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-26 01:58:52.220375 | orchestrator | ok: [testbed-manager] 2026-03-26 01:58:52.220387 | orchestrator | 2026-03-26 01:58:52.220397 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-26 01:58:52.220434 | orchestrator | Thursday 26 March 2026 01:57:39 +0000 (0:00:36.629) 0:00:42.653 ******** 2026-03-26 01:58:52.220447 | orchestrator | changed: [testbed-manager] 2026-03-26 01:58:52.220460 | orchestrator | 2026-03-26 01:58:52.220472 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-26 01:58:52.220484 | orchestrator | Thursday 26 March 2026 01:57:51 +0000 (0:00:12.112) 0:00:54.765 ******** 2026-03-26 01:58:52.220497 | orchestrator | Pausing for 60 seconds 2026-03-26 01:58:52.220510 | orchestrator | changed: [testbed-manager] 2026-03-26 01:58:52.220523 | orchestrator | 2026-03-26 01:58:52.220535 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-26 01:58:52.220548 | orchestrator | Thursday 26 March 2026 01:58:51 +0000 (0:01:00.088) 0:01:54.854 ******** 2026-03-26 01:58:52.220560 | orchestrator | ok: [testbed-manager] 2026-03-26 01:58:52.220573 | orchestrator | 2026-03-26 01:58:52.220586 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-26 01:58:52.220598 | orchestrator | Thursday 26 March 2026 01:58:51 +0000 (0:00:00.074) 0:01:54.928 ******** 2026-03-26 01:58:52.220611 | orchestrator | changed: [testbed-manager] 2026-03-26 01:58:52.220623 | orchestrator | 2026-03-26 01:58:52.220635 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:58:52.220648 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 01:58:52.220661 | orchestrator | 2026-03-26 01:58:52.220673 | orchestrator | 2026-03-26 01:58:52.220686 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-26 01:58:52.220698 | orchestrator | Thursday 26 March 2026 01:58:52 +0000 (0:00:00.567) 0:01:55.495 ******** 2026-03-26 01:58:52.220710 | orchestrator | =============================================================================== 2026-03-26 01:58:52.220739 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-03-26 01:58:52.220752 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.63s 2026-03-26 01:58:52.220765 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.11s 2026-03-26 01:58:52.220776 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.92s 2026-03-26 01:58:52.220787 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.32s 2026-03-26 01:58:52.220821 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.15s 2026-03-26 01:58:52.220833 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s 2026-03-26 01:58:52.220843 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.57s 2026-03-26 01:58:52.220854 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2026-03-26 01:58:52.220865 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s 2026-03-26 01:58:52.220875 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-26 01:58:52.437079 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-03-26 01:58:52.437542 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-26 01:58:52.495910 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-26 01:58:52.496009 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-03-26 01:58:52.501744 | orchestrator | + set -e 2026-03-26 01:58:52.501841 | orchestrator | + NAMESPACE=kolla/release 2026-03-26 01:58:52.501864 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-26 01:58:52.509685 | orchestrator | ++ semver 9.5.0 9.0.0 2026-03-26 01:58:52.586367 | orchestrator | + [[ 1 -lt 0 ]] 2026-03-26 01:58:52.586695 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-26 01:59:04.546290 | orchestrator | 2026-03-26 01:59:04 | INFO  | Task b14432b8-054d-4f4f-bffe-fa6ab43829d4 (operator) was prepared for execution. 2026-03-26 01:59:04.546406 | orchestrator | 2026-03-26 01:59:04 | INFO  | It takes a moment until task b14432b8-054d-4f4f-bffe-fa6ab43829d4 (operator) has been started and output is visible here. 2026-03-26 01:59:20.813229 | orchestrator | 2026-03-26 01:59:20.813341 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-26 01:59:20.813356 | orchestrator | 2026-03-26 01:59:20.813365 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 01:59:20.813375 | orchestrator | Thursday 26 March 2026 01:59:08 +0000 (0:00:00.152) 0:00:00.152 ******** 2026-03-26 01:59:20.813384 | orchestrator | ok: [testbed-node-0] 2026-03-26 01:59:20.813419 | orchestrator | ok: [testbed-node-5] 2026-03-26 01:59:20.813429 | orchestrator | ok: [testbed-node-2] 2026-03-26 01:59:20.813437 | orchestrator | ok: [testbed-node-1] 2026-03-26 01:59:20.813446 | orchestrator | ok: [testbed-node-3] 2026-03-26 01:59:20.813455 | orchestrator | ok: [testbed-node-4] 2026-03-26 01:59:20.813463 | orchestrator | 2026-03-26 01:59:20.813472 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-26 01:59:20.813481 | orchestrator | Thursday 26 March 2026 01:59:12 +0000 (0:00:03.324) 0:00:03.477 
******** 2026-03-26 01:59:20.813490 | orchestrator | ok: [testbed-node-5] 2026-03-26 01:59:20.813499 | orchestrator | ok: [testbed-node-3] 2026-03-26 01:59:20.813507 | orchestrator | ok: [testbed-node-1] 2026-03-26 01:59:20.813534 | orchestrator | ok: [testbed-node-4] 2026-03-26 01:59:20.813549 | orchestrator | ok: [testbed-node-2] 2026-03-26 01:59:20.813562 | orchestrator | ok: [testbed-node-0] 2026-03-26 01:59:20.813575 | orchestrator | 2026-03-26 01:59:20.813588 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-26 01:59:20.813603 | orchestrator | 2026-03-26 01:59:20.813619 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-26 01:59:20.813635 | orchestrator | Thursday 26 March 2026 01:59:12 +0000 (0:00:00.753) 0:00:04.230 ******** 2026-03-26 01:59:20.813649 | orchestrator | ok: [testbed-node-0] 2026-03-26 01:59:20.813665 | orchestrator | ok: [testbed-node-1] 2026-03-26 01:59:20.813674 | orchestrator | ok: [testbed-node-2] 2026-03-26 01:59:20.813682 | orchestrator | ok: [testbed-node-3] 2026-03-26 01:59:20.813691 | orchestrator | ok: [testbed-node-4] 2026-03-26 01:59:20.813701 | orchestrator | ok: [testbed-node-5] 2026-03-26 01:59:20.813709 | orchestrator | 2026-03-26 01:59:20.813718 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-26 01:59:20.813727 | orchestrator | Thursday 26 March 2026 01:59:13 +0000 (0:00:00.195) 0:00:04.425 ******** 2026-03-26 01:59:20.813736 | orchestrator | ok: [testbed-node-0] 2026-03-26 01:59:20.813746 | orchestrator | ok: [testbed-node-1] 2026-03-26 01:59:20.813756 | orchestrator | ok: [testbed-node-2] 2026-03-26 01:59:20.813766 | orchestrator | ok: [testbed-node-3] 2026-03-26 01:59:20.813776 | orchestrator | ok: [testbed-node-4] 2026-03-26 01:59:20.813786 | orchestrator | ok: [testbed-node-5] 2026-03-26 01:59:20.813796 | orchestrator | 2026-03-26 01:59:20.813831 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-26 01:59:20.813848 | orchestrator | Thursday 26 March 2026 01:59:13 +0000 (0:00:00.195) 0:00:04.621 ******** 2026-03-26 01:59:20.813859 | orchestrator | changed: [testbed-node-0] 2026-03-26 01:59:20.813870 | orchestrator | changed: [testbed-node-4] 2026-03-26 01:59:20.813880 | orchestrator | changed: [testbed-node-2] 2026-03-26 01:59:20.813890 | orchestrator | changed: [testbed-node-1] 2026-03-26 01:59:20.813900 | orchestrator | changed: [testbed-node-3] 2026-03-26 01:59:20.813910 | orchestrator | changed: [testbed-node-5] 2026-03-26 01:59:20.813920 | orchestrator | 2026-03-26 01:59:20.813929 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-26 01:59:20.813938 | orchestrator | Thursday 26 March 2026 01:59:13 +0000 (0:00:00.624) 0:00:05.246 ******** 2026-03-26 01:59:20.813947 | orchestrator | changed: [testbed-node-3] 2026-03-26 01:59:20.813955 | orchestrator | changed: [testbed-node-2] 2026-03-26 01:59:20.813964 | orchestrator | changed: [testbed-node-1] 2026-03-26 01:59:20.813973 | orchestrator | changed: [testbed-node-4] 2026-03-26 01:59:20.813981 | orchestrator | changed: [testbed-node-0] 2026-03-26 01:59:20.813990 | orchestrator | changed: [testbed-node-5] 2026-03-26 01:59:20.814072 | orchestrator | 2026-03-26 01:59:20.814082 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-26 01:59:20.814091 | orchestrator | Thursday 26 March 2026 01:59:14 +0000 (0:00:00.874) 0:00:06.120 ******** 2026-03-26 01:59:20.814100 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-26 01:59:20.814109 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-26 01:59:20.814118 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-26 01:59:20.814127 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-03-26 01:59:20.814136 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-26 01:59:20.814144 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-26 01:59:20.814153 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-26 01:59:20.814162 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-26 01:59:20.814170 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-26 01:59:20.814179 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-26 01:59:20.814187 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-26 01:59:20.814196 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-26 01:59:20.814210 | orchestrator | 2026-03-26 01:59:20.814225 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-26 01:59:20.814239 | orchestrator | Thursday 26 March 2026 01:59:15 +0000 (0:00:01.247) 0:00:07.367 ******** 2026-03-26 01:59:20.814253 | orchestrator | changed: [testbed-node-3] 2026-03-26 01:59:20.814266 | orchestrator | changed: [testbed-node-0] 2026-03-26 01:59:20.814280 | orchestrator | changed: [testbed-node-5] 2026-03-26 01:59:20.814293 | orchestrator | changed: [testbed-node-2] 2026-03-26 01:59:20.814307 | orchestrator | changed: [testbed-node-4] 2026-03-26 01:59:20.814322 | orchestrator | changed: [testbed-node-1] 2026-03-26 01:59:20.814336 | orchestrator | 2026-03-26 01:59:20.814351 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-26 01:59:20.814367 | orchestrator | Thursday 26 March 2026 01:59:17 +0000 (0:00:01.281) 0:00:08.648 ******** 2026-03-26 01:59:20.814382 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-26 01:59:20.814396 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-26 01:59:20.814411 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-26 01:59:20.814425 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-26 01:59:20.814463 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-26 01:59:20.814480 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-26 01:59:20.814496 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-26 01:59:20.814510 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-03-26 01:59:20.814525 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-26 01:59:20.814534 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-26 01:59:20.814543 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-26 01:59:20.814552 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-26 01:59:20.814561 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-26 01:59:20.814569 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-26 01:59:20.814578 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-26 01:59:20.814592 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-26 01:59:20.814606 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-26 01:59:20.814620 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-26 01:59:20.814636 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-26 01:59:20.814651 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-26 01:59:20.814680 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-26 01:59:20.814689 | 
orchestrator | 2026-03-26 01:59:20.814698 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-26 01:59:20.814708 | orchestrator | Thursday 26 March 2026 01:59:18 +0000 (0:00:01.301) 0:00:09.950 ******** 2026-03-26 01:59:20.814717 | orchestrator | skipping: [testbed-node-0] 2026-03-26 01:59:20.814726 | orchestrator | skipping: [testbed-node-1] 2026-03-26 01:59:20.814734 | orchestrator | skipping: [testbed-node-2] 2026-03-26 01:59:20.814743 | orchestrator | skipping: [testbed-node-3] 2026-03-26 01:59:20.814752 | orchestrator | skipping: [testbed-node-4] 2026-03-26 01:59:20.814760 | orchestrator | skipping: [testbed-node-5] 2026-03-26 01:59:20.814769 | orchestrator | 2026-03-26 01:59:20.814778 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-26 01:59:20.814786 | orchestrator | Thursday 26 March 2026 01:59:18 +0000 (0:00:00.198) 0:00:10.148 ******** 2026-03-26 01:59:20.814795 | orchestrator | skipping: [testbed-node-0] 2026-03-26 01:59:20.814803 | orchestrator | skipping: [testbed-node-1] 2026-03-26 01:59:20.814871 | orchestrator | skipping: [testbed-node-2] 2026-03-26 01:59:20.814881 | orchestrator | skipping: [testbed-node-3] 2026-03-26 01:59:20.814889 | orchestrator | skipping: [testbed-node-4] 2026-03-26 01:59:20.814898 | orchestrator | skipping: [testbed-node-5] 2026-03-26 01:59:20.814907 | orchestrator | 2026-03-26 01:59:20.814915 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-26 01:59:20.814924 | orchestrator | Thursday 26 March 2026 01:59:18 +0000 (0:00:00.187) 0:00:10.336 ******** 2026-03-26 01:59:20.814933 | orchestrator | changed: [testbed-node-0] 2026-03-26 01:59:20.814941 | orchestrator | changed: [testbed-node-5] 2026-03-26 01:59:20.814950 | orchestrator | changed: [testbed-node-3] 2026-03-26 01:59:20.814959 | orchestrator | changed: [testbed-node-2] 2026-03-26 
01:59:20.814968 | orchestrator | changed: [testbed-node-4] 2026-03-26 01:59:20.814976 | orchestrator | changed: [testbed-node-1] 2026-03-26 01:59:20.814985 | orchestrator | 2026-03-26 01:59:20.814993 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-26 01:59:20.815002 | orchestrator | Thursday 26 March 2026 01:59:19 +0000 (0:00:00.585) 0:00:10.921 ******** 2026-03-26 01:59:20.815010 | orchestrator | skipping: [testbed-node-0] 2026-03-26 01:59:20.815019 | orchestrator | skipping: [testbed-node-1] 2026-03-26 01:59:20.815027 | orchestrator | skipping: [testbed-node-2] 2026-03-26 01:59:20.815036 | orchestrator | skipping: [testbed-node-3] 2026-03-26 01:59:20.815057 | orchestrator | skipping: [testbed-node-4] 2026-03-26 01:59:20.815066 | orchestrator | skipping: [testbed-node-5] 2026-03-26 01:59:20.815074 | orchestrator | 2026-03-26 01:59:20.815083 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-26 01:59:20.815092 | orchestrator | Thursday 26 March 2026 01:59:19 +0000 (0:00:00.184) 0:00:11.105 ******** 2026-03-26 01:59:20.815100 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-26 01:59:20.815109 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-26 01:59:20.815118 | orchestrator | changed: [testbed-node-3] 2026-03-26 01:59:20.815126 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-26 01:59:20.815135 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-26 01:59:20.815143 | orchestrator | changed: [testbed-node-0] 2026-03-26 01:59:20.815152 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-26 01:59:20.815160 | orchestrator | changed: [testbed-node-2] 2026-03-26 01:59:20.815169 | orchestrator | changed: [testbed-node-5] 2026-03-26 01:59:20.815177 | orchestrator | changed: [testbed-node-4] 2026-03-26 01:59:20.815186 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-26 
01:59:20.815194 | orchestrator | changed: [testbed-node-1] 2026-03-26 01:59:20.815203 | orchestrator | 2026-03-26 01:59:20.815211 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-26 01:59:20.815220 | orchestrator | Thursday 26 March 2026 01:59:20 +0000 (0:00:00.722) 0:00:11.828 ******** 2026-03-26 01:59:20.815235 | orchestrator | skipping: [testbed-node-0] 2026-03-26 01:59:20.815244 | orchestrator | skipping: [testbed-node-1] 2026-03-26 01:59:20.815252 | orchestrator | skipping: [testbed-node-2] 2026-03-26 01:59:20.815261 | orchestrator | skipping: [testbed-node-3] 2026-03-26 01:59:20.815269 | orchestrator | skipping: [testbed-node-4] 2026-03-26 01:59:20.815278 | orchestrator | skipping: [testbed-node-5] 2026-03-26 01:59:20.815286 | orchestrator | 2026-03-26 01:59:20.815295 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-26 01:59:20.815303 | orchestrator | Thursday 26 March 2026 01:59:20 +0000 (0:00:00.177) 0:00:12.006 ******** 2026-03-26 01:59:20.815312 | orchestrator | skipping: [testbed-node-0] 2026-03-26 01:59:20.815320 | orchestrator | skipping: [testbed-node-1] 2026-03-26 01:59:20.815329 | orchestrator | skipping: [testbed-node-2] 2026-03-26 01:59:20.815337 | orchestrator | skipping: [testbed-node-3] 2026-03-26 01:59:20.815354 | orchestrator | skipping: [testbed-node-4] 2026-03-26 01:59:22.181864 | orchestrator | skipping: [testbed-node-5] 2026-03-26 01:59:22.181938 | orchestrator | 2026-03-26 01:59:22.181944 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-26 01:59:22.181951 | orchestrator | Thursday 26 March 2026 01:59:20 +0000 (0:00:00.177) 0:00:12.183 ******** 2026-03-26 01:59:22.181955 | orchestrator | skipping: [testbed-node-0] 2026-03-26 01:59:22.181959 | orchestrator | skipping: [testbed-node-1] 2026-03-26 01:59:22.181963 | orchestrator | skipping: [testbed-node-2] 2026-03-26 
01:59:22.181967 | orchestrator | skipping: [testbed-node-3] 2026-03-26 01:59:22.181971 | orchestrator | skipping: [testbed-node-4] 2026-03-26 01:59:22.181974 | orchestrator | skipping: [testbed-node-5] 2026-03-26 01:59:22.181978 | orchestrator | 2026-03-26 01:59:22.181982 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-26 01:59:22.181986 | orchestrator | Thursday 26 March 2026 01:59:20 +0000 (0:00:00.168) 0:00:12.352 ******** 2026-03-26 01:59:22.181990 | orchestrator | changed: [testbed-node-0] 2026-03-26 01:59:22.181994 | orchestrator | changed: [testbed-node-1] 2026-03-26 01:59:22.182071 | orchestrator | changed: [testbed-node-2] 2026-03-26 01:59:22.182077 | orchestrator | changed: [testbed-node-3] 2026-03-26 01:59:22.182081 | orchestrator | changed: [testbed-node-4] 2026-03-26 01:59:22.182085 | orchestrator | changed: [testbed-node-5] 2026-03-26 01:59:22.182089 | orchestrator | 2026-03-26 01:59:22.182092 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-26 01:59:22.182096 | orchestrator | Thursday 26 March 2026 01:59:21 +0000 (0:00:00.650) 0:00:13.003 ******** 2026-03-26 01:59:22.182100 | orchestrator | skipping: [testbed-node-0] 2026-03-26 01:59:22.182103 | orchestrator | skipping: [testbed-node-1] 2026-03-26 01:59:22.182108 | orchestrator | skipping: [testbed-node-2] 2026-03-26 01:59:22.182111 | orchestrator | skipping: [testbed-node-3] 2026-03-26 01:59:22.182115 | orchestrator | skipping: [testbed-node-4] 2026-03-26 01:59:22.182119 | orchestrator | skipping: [testbed-node-5] 2026-03-26 01:59:22.182122 | orchestrator | 2026-03-26 01:59:22.182126 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 01:59:22.182131 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 01:59:22.182137 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 01:59:22.182141 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 01:59:22.182144 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 01:59:22.182148 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 01:59:22.182166 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 01:59:22.182170 | orchestrator | 2026-03-26 01:59:22.182174 | orchestrator | 2026-03-26 01:59:22.182178 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 01:59:22.182182 | orchestrator | Thursday 26 March 2026 01:59:21 +0000 (0:00:00.269) 0:00:13.273 ******** 2026-03-26 01:59:22.182185 | orchestrator | =============================================================================== 2026-03-26 01:59:22.182189 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s 2026-03-26 01:59:22.182193 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.30s 2026-03-26 01:59:22.182198 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s 2026-03-26 01:59:22.182201 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.25s 2026-03-26 01:59:22.182205 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s 2026-03-26 01:59:22.182209 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s 2026-03-26 01:59:22.182212 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2026-03-26 01:59:22.182216 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.65s 2026-03-26 01:59:22.182220 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2026-03-26 01:59:22.182223 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2026-03-26 01:59:22.182227 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.27s 2026-03-26 01:59:22.182231 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.20s 2026-03-26 01:59:22.182235 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2026-03-26 01:59:22.182238 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2026-03-26 01:59:22.182242 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.19s 2026-03-26 01:59:22.182246 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-03-26 01:59:22.182250 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2026-03-26 01:59:22.182253 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2026-03-26 01:59:22.182257 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-03-26 01:59:22.558495 | orchestrator | + osism apply --environment custom facts 2026-03-26 01:59:24.312267 | orchestrator | 2026-03-26 01:59:24 | INFO  | Trying to run play facts in environment custom 2026-03-26 01:59:34.391029 | orchestrator | 2026-03-26 01:59:34 | INFO  | Task 5fcfbccf-d31a-4b57-955a-42b16c057297 (facts) was prepared for execution. 2026-03-26 01:59:34.391120 | orchestrator | 2026-03-26 01:59:34 | INFO  | It takes a moment until task 5fcfbccf-d31a-4b57-955a-42b16c057297 (facts) has been started and output is visible here. 
2026-03-26 02:00:18.654090 | orchestrator | 2026-03-26 02:00:18.654175 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-26 02:00:18.654185 | orchestrator | 2026-03-26 02:00:18.654191 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-26 02:00:18.654197 | orchestrator | Thursday 26 March 2026 01:59:38 +0000 (0:00:00.093) 0:00:00.093 ******** 2026-03-26 02:00:18.654203 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:18.654209 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:00:18.654215 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:18.654220 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:00:18.654225 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:00:18.654231 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:18.654252 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:18.654258 | orchestrator | 2026-03-26 02:00:18.654263 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-26 02:00:18.654269 | orchestrator | Thursday 26 March 2026 01:59:40 +0000 (0:00:01.381) 0:00:01.475 ******** 2026-03-26 02:00:18.654274 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:18.654279 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:18.654284 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:18.654289 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:00:18.654295 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:18.654300 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:00:18.654305 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:00:18.654310 | orchestrator | 2026-03-26 02:00:18.654315 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-26 02:00:18.654320 | orchestrator | 2026-03-26 02:00:18.654325 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-03-26 02:00:18.654330 | orchestrator | Thursday 26 March 2026 01:59:41 +0000 (0:00:01.181) 0:00:02.657 ******** 2026-03-26 02:00:18.654335 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.654341 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.654346 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.654351 | orchestrator | 2026-03-26 02:00:18.654356 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-26 02:00:18.654362 | orchestrator | Thursday 26 March 2026 01:59:41 +0000 (0:00:00.110) 0:00:02.767 ******** 2026-03-26 02:00:18.654367 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.654372 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.654377 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.654382 | orchestrator | 2026-03-26 02:00:18.654387 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-26 02:00:18.654392 | orchestrator | Thursday 26 March 2026 01:59:41 +0000 (0:00:00.205) 0:00:02.973 ******** 2026-03-26 02:00:18.654397 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.654402 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.654407 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.654412 | orchestrator | 2026-03-26 02:00:18.654417 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-26 02:00:18.654423 | orchestrator | Thursday 26 March 2026 01:59:41 +0000 (0:00:00.238) 0:00:03.211 ******** 2026-03-26 02:00:18.654429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:00:18.654436 | orchestrator | 2026-03-26 02:00:18.654441 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-03-26 02:00:18.654446 | orchestrator | Thursday 26 March 2026 01:59:41 +0000 (0:00:00.152) 0:00:03.363 ******** 2026-03-26 02:00:18.654451 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.654456 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.654461 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.654479 | orchestrator | 2026-03-26 02:00:18.654485 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-26 02:00:18.654497 | orchestrator | Thursday 26 March 2026 01:59:42 +0000 (0:00:00.418) 0:00:03.782 ******** 2026-03-26 02:00:18.654502 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:00:18.654507 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:00:18.654512 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:00:18.654517 | orchestrator | 2026-03-26 02:00:18.654522 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-26 02:00:18.654527 | orchestrator | Thursday 26 March 2026 01:59:42 +0000 (0:00:00.139) 0:00:03.921 ******** 2026-03-26 02:00:18.654532 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:18.654537 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:18.654542 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:18.654547 | orchestrator | 2026-03-26 02:00:18.654553 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-26 02:00:18.654562 | orchestrator | Thursday 26 March 2026 01:59:43 +0000 (0:00:01.063) 0:00:04.985 ******** 2026-03-26 02:00:18.654567 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.654572 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.654577 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.654582 | orchestrator | 2026-03-26 02:00:18.654587 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-26 
02:00:18.654627 | orchestrator | Thursday 26 March 2026 01:59:43 +0000 (0:00:00.474) 0:00:05.459 ******** 2026-03-26 02:00:18.654635 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:18.654644 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:18.654652 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:18.654664 | orchestrator | 2026-03-26 02:00:18.654676 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-26 02:00:18.654683 | orchestrator | Thursday 26 March 2026 01:59:45 +0000 (0:00:01.094) 0:00:06.554 ******** 2026-03-26 02:00:18.654692 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:18.654700 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:18.654708 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:18.654716 | orchestrator | 2026-03-26 02:00:18.654725 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-26 02:00:18.654732 | orchestrator | Thursday 26 March 2026 02:00:01 +0000 (0:00:16.562) 0:00:23.116 ******** 2026-03-26 02:00:18.654740 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:00:18.654748 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:00:18.654757 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:00:18.654765 | orchestrator | 2026-03-26 02:00:18.654774 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-26 02:00:18.654800 | orchestrator | Thursday 26 March 2026 02:00:01 +0000 (0:00:00.105) 0:00:23.221 ******** 2026-03-26 02:00:18.654818 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:18.654826 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:18.654834 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:18.654860 | orchestrator | 2026-03-26 02:00:18.654869 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-26 
02:00:18.654885 | orchestrator | Thursday 26 March 2026 02:00:09 +0000 (0:00:07.791) 0:00:31.013 ******** 2026-03-26 02:00:18.654893 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.654901 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.654909 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.654917 | orchestrator | 2026-03-26 02:00:18.654925 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-26 02:00:18.654933 | orchestrator | Thursday 26 March 2026 02:00:10 +0000 (0:00:00.457) 0:00:31.470 ******** 2026-03-26 02:00:18.654941 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-26 02:00:18.654950 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-26 02:00:18.654959 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-26 02:00:18.654967 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-26 02:00:18.654975 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-26 02:00:18.654983 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-26 02:00:18.654992 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-26 02:00:18.655000 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-26 02:00:18.655008 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-26 02:00:18.655015 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-26 02:00:18.655021 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-26 02:00:18.655026 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-26 02:00:18.655031 | orchestrator | 2026-03-26 02:00:18.655036 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-03-26 02:00:18.655048 | orchestrator | Thursday 26 March 2026 02:00:13 +0000 (0:00:03.462) 0:00:34.933 ******** 2026-03-26 02:00:18.655053 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.655058 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.655063 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.655069 | orchestrator | 2026-03-26 02:00:18.655074 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-26 02:00:18.655079 | orchestrator | 2026-03-26 02:00:18.655084 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-26 02:00:18.655089 | orchestrator | Thursday 26 March 2026 02:00:14 +0000 (0:00:01.500) 0:00:36.434 ******** 2026-03-26 02:00:18.655094 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:00:18.655099 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:00:18.655104 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:00:18.655109 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:18.655115 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:18.655120 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:18.655125 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:18.655130 | orchestrator | 2026-03-26 02:00:18.655135 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:00:18.655141 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:00:18.655147 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:00:18.655153 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:00:18.655158 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:00:18.655164 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:00:18.655170 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:00:18.655175 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:00:18.655180 | orchestrator | 2026-03-26 02:00:18.655185 | orchestrator | 2026-03-26 02:00:18.655190 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:00:18.655195 | orchestrator | Thursday 26 March 2026 02:00:18 +0000 (0:00:03.653) 0:00:40.088 ******** 2026-03-26 02:00:18.655200 | orchestrator | =============================================================================== 2026-03-26 02:00:18.655205 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.56s 2026-03-26 02:00:18.655210 | orchestrator | Install required packages (Debian) -------------------------------------- 7.79s 2026-03-26 02:00:18.655215 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.65s 2026-03-26 02:00:18.655220 | orchestrator | Copy fact files --------------------------------------------------------- 3.46s 2026-03-26 02:00:18.655225 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.50s 2026-03-26 02:00:18.655231 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s 2026-03-26 02:00:18.655243 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s 2026-03-26 02:00:18.944764 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.09s 2026-03-26 02:00:18.944936 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s 2026-03-26 02:00:18.944968 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.47s 2026-03-26 02:00:18.944979 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s 2026-03-26 02:00:18.945007 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s 2026-03-26 02:00:18.945016 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s 2026-03-26 02:00:18.945025 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2026-03-26 02:00:18.945034 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s 2026-03-26 02:00:18.945043 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-03-26 02:00:18.945052 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2026-03-26 02:00:18.945061 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2026-03-26 02:00:19.320213 | orchestrator | + osism apply bootstrap 2026-03-26 02:00:31.597397 | orchestrator | 2026-03-26 02:00:31 | INFO  | Task 4680eed5-a920-43b6-870f-1050432d97e6 (bootstrap) was prepared for execution. 2026-03-26 02:00:31.597504 | orchestrator | 2026-03-26 02:00:31 | INFO  | It takes a moment until task 4680eed5-a920-43b6-870f-1050432d97e6 (bootstrap) has been started and output is visible here. 
2026-03-26 02:00:48.025525 | orchestrator | 2026-03-26 02:00:48.025645 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-26 02:00:48.025663 | orchestrator | 2026-03-26 02:00:48.025675 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-26 02:00:48.025687 | orchestrator | Thursday 26 March 2026 02:00:36 +0000 (0:00:00.171) 0:00:00.171 ******** 2026-03-26 02:00:48.025698 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:48.025711 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:48.025721 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:48.025732 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:48.025743 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:00:48.025753 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:00:48.025764 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:00:48.025775 | orchestrator | 2026-03-26 02:00:48.025786 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-26 02:00:48.025797 | orchestrator | 2026-03-26 02:00:48.025808 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-26 02:00:48.025819 | orchestrator | Thursday 26 March 2026 02:00:36 +0000 (0:00:00.288) 0:00:00.459 ******** 2026-03-26 02:00:48.025830 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:00:48.025840 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:00:48.025851 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:00:48.025909 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:48.025922 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:48.025933 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:48.025943 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:48.025954 | orchestrator | 2026-03-26 02:00:48.025965 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-03-26 02:00:48.025976 | orchestrator | 2026-03-26 02:00:48.025987 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-26 02:00:48.025998 | orchestrator | Thursday 26 March 2026 02:00:40 +0000 (0:00:03.598) 0:00:04.058 ******** 2026-03-26 02:00:48.026010 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-26 02:00:48.026083 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-26 02:00:48.026098 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-26 02:00:48.026111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-03-26 02:00:48.026125 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-26 02:00:48.026138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 02:00:48.026152 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-26 02:00:48.026164 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-03-26 02:00:48.026206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 02:00:48.026259 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-26 02:00:48.026274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 02:00:48.026288 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 02:00:48.026301 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-26 02:00:48.026314 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-03-26 02:00:48.026327 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 02:00:48.026339 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 02:00:48.026350 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 02:00:48.026360 | orchestrator | skipping: 
[testbed-manager] 2026-03-26 02:00:48.026371 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 02:00:48.026382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 02:00:48.026393 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 02:00:48.026404 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-26 02:00:48.026414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 02:00:48.026425 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 02:00:48.026436 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-26 02:00:48.026446 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-26 02:00:48.026457 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-26 02:00:48.026468 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-03-26 02:00:48.026478 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:00:48.026489 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:00:48.026500 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-03-26 02:00:48.026511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-26 02:00:48.026522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-03-26 02:00:48.026533 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-26 02:00:48.026543 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-26 02:00:48.026554 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:00:48.026565 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-26 02:00:48.026576 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-26 02:00:48.026586 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-26 
02:00:48.026597 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-26 02:00:48.026608 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 02:00:48.026619 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-26 02:00:48.026630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 02:00:48.026640 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-26 02:00:48.026651 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-03-26 02:00:48.026662 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 02:00:48.026673 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:00:48.026703 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-26 02:00:48.026714 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:00:48.026725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 02:00:48.026753 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 02:00:48.026764 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 02:00:48.026775 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-26 02:00:48.026786 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-26 02:00:48.026806 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-26 02:00:48.026817 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:00:48.026827 | orchestrator | 2026-03-26 02:00:48.026839 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-26 02:00:48.026849 | orchestrator | 2026-03-26 02:00:48.026948 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-26 02:00:48.026963 | orchestrator | Thursday 26 March 2026 02:00:40 +0000 (0:00:00.675) 
0:00:04.734 ******** 2026-03-26 02:00:48.026974 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:00:48.026985 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:48.026996 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:48.027007 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:48.027017 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:00:48.027028 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:48.027039 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:00:48.027049 | orchestrator | 2026-03-26 02:00:48.027060 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-26 02:00:48.027071 | orchestrator | Thursday 26 March 2026 02:00:41 +0000 (0:00:01.214) 0:00:05.949 ******** 2026-03-26 02:00:48.027082 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:48.027093 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:00:48.027103 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:00:48.027114 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:00:48.027124 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:00:48.027135 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:00:48.027145 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:00:48.027156 | orchestrator | 2026-03-26 02:00:48.027167 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-26 02:00:48.027178 | orchestrator | Thursday 26 March 2026 02:00:43 +0000 (0:00:01.195) 0:00:07.145 ******** 2026-03-26 02:00:48.027190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:00:48.027203 | orchestrator | 2026-03-26 02:00:48.027214 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-26 02:00:48.027225 | orchestrator | Thursday 
26 March 2026 02:00:43 +0000 (0:00:00.304) 0:00:07.450 ******** 2026-03-26 02:00:48.027236 | orchestrator | changed: [testbed-manager] 2026-03-26 02:00:48.027247 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:48.027258 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:48.027269 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:00:48.027280 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:00:48.027290 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:00:48.027301 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:48.027312 | orchestrator | 2026-03-26 02:00:48.027323 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-26 02:00:48.027334 | orchestrator | Thursday 26 March 2026 02:00:45 +0000 (0:00:02.118) 0:00:09.568 ******** 2026-03-26 02:00:48.027345 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:00:48.027357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:00:48.027370 | orchestrator | 2026-03-26 02:00:48.027381 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-26 02:00:48.027392 | orchestrator | Thursday 26 March 2026 02:00:45 +0000 (0:00:00.305) 0:00:09.874 ******** 2026-03-26 02:00:48.027403 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:48.027414 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:48.027425 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:00:48.027436 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:48.027446 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:00:48.027457 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:00:48.027476 | orchestrator | 2026-03-26 02:00:48.027493 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-03-26 02:00:48.027504 | orchestrator | Thursday 26 March 2026 02:00:46 +0000 (0:00:00.996) 0:00:10.871 ******** 2026-03-26 02:00:48.027515 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:00:48.027527 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:00:48.027537 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:00:48.027548 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:00:48.027559 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:00:48.027581 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:00:48.027592 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:00:48.027603 | orchestrator | 2026-03-26 02:00:48.027613 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-26 02:00:48.027624 | orchestrator | Thursday 26 March 2026 02:00:47 +0000 (0:00:00.561) 0:00:11.433 ******** 2026-03-26 02:00:48.027635 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:00:48.027646 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:00:48.027657 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:00:48.027668 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:00:48.027678 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:00:48.027689 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:00:48.027700 | orchestrator | ok: [testbed-manager] 2026-03-26 02:00:48.027711 | orchestrator | 2026-03-26 02:00:48.027729 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-26 02:00:48.027749 | orchestrator | Thursday 26 March 2026 02:00:47 +0000 (0:00:00.457) 0:00:11.890 ******** 2026-03-26 02:00:48.027767 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:00:48.027785 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:00:48.027813 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:01:00.398550 | orchestrator | skipping: 
[testbed-node-5] 2026-03-26 02:01:00.398693 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:01:00.398711 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:01:00.398722 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:01:00.398732 | orchestrator | 2026-03-26 02:01:00.398743 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-26 02:01:00.398755 | orchestrator | Thursday 26 March 2026 02:00:48 +0000 (0:00:00.243) 0:00:12.134 ******** 2026-03-26 02:01:00.398767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:01:00.398795 | orchestrator | 2026-03-26 02:01:00.398805 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-26 02:01:00.398816 | orchestrator | Thursday 26 March 2026 02:00:48 +0000 (0:00:00.303) 0:00:12.437 ******** 2026-03-26 02:01:00.398826 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:01:00.398836 | orchestrator | 2026-03-26 02:01:00.398846 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-26 02:01:00.398856 | orchestrator | Thursday 26 March 2026 02:00:48 +0000 (0:00:00.326) 0:00:12.764 ******** 2026-03-26 02:01:00.398866 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.398954 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.398964 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.398974 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.398984 | orchestrator | ok: [testbed-node-2] 2026-03-26 
02:01:00.398994 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.399004 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.399014 | orchestrator | 2026-03-26 02:01:00.399024 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-26 02:01:00.399034 | orchestrator | Thursday 26 March 2026 02:00:50 +0000 (0:00:01.380) 0:00:14.144 ******** 2026-03-26 02:01:00.399068 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:01:00.399080 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:01:00.399091 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:01:00.399103 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:01:00.399115 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:01:00.399127 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:01:00.399139 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:01:00.399156 | orchestrator | 2026-03-26 02:01:00.399174 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-26 02:01:00.399191 | orchestrator | Thursday 26 March 2026 02:00:50 +0000 (0:00:00.353) 0:00:14.497 ******** 2026-03-26 02:01:00.399205 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.399216 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.399228 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.399239 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.399250 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.399262 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.399273 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:01:00.399285 | orchestrator | 2026-03-26 02:01:00.399298 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-26 02:01:00.399314 | orchestrator | Thursday 26 March 2026 02:00:51 +0000 (0:00:00.559) 0:00:15.057 ******** 2026-03-26 02:01:00.399331 | orchestrator | skipping: 
[testbed-manager] 2026-03-26 02:01:00.399348 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:01:00.399363 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:01:00.399379 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:01:00.399395 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:01:00.399410 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:01:00.399425 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:01:00.399440 | orchestrator | 2026-03-26 02:01:00.399456 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-26 02:01:00.399472 | orchestrator | Thursday 26 March 2026 02:00:51 +0000 (0:00:00.281) 0:00:15.339 ******** 2026-03-26 02:01:00.399488 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.399503 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:01:00.399518 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:01:00.399533 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:01:00.399547 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:01:00.399562 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:01:00.399592 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:01:00.399609 | orchestrator | 2026-03-26 02:01:00.399625 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-26 02:01:00.399642 | orchestrator | Thursday 26 March 2026 02:00:51 +0000 (0:00:00.557) 0:00:15.896 ******** 2026-03-26 02:01:00.399657 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.399673 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:01:00.399689 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:01:00.399706 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:01:00.399723 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:01:00.399738 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:01:00.399748 | orchestrator | changed: 
[testbed-node-1] 2026-03-26 02:01:00.399758 | orchestrator | 2026-03-26 02:01:00.399767 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-26 02:01:00.399777 | orchestrator | Thursday 26 March 2026 02:00:53 +0000 (0:00:01.134) 0:00:17.030 ******** 2026-03-26 02:01:00.399787 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.399797 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.399806 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.399815 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.399825 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.399834 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:01:00.399844 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.399858 | orchestrator | 2026-03-26 02:01:00.399905 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-26 02:01:00.399939 | orchestrator | Thursday 26 March 2026 02:00:54 +0000 (0:00:01.025) 0:00:18.056 ******** 2026-03-26 02:01:00.399984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:01:00.400004 | orchestrator | 2026-03-26 02:01:00.400020 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-26 02:01:00.400037 | orchestrator | Thursday 26 March 2026 02:00:54 +0000 (0:00:00.329) 0:00:18.385 ******** 2026-03-26 02:01:00.400052 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:01:00.400069 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:01:00.400086 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:01:00.400101 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:01:00.400119 | orchestrator | changed: [testbed-node-1] 2026-03-26 
02:01:00.400131 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:01:00.400140 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:01:00.400149 | orchestrator | 2026-03-26 02:01:00.400159 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-26 02:01:00.400169 | orchestrator | Thursday 26 March 2026 02:00:55 +0000 (0:00:01.274) 0:00:19.660 ******** 2026-03-26 02:01:00.400178 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.400188 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.400198 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.400207 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.400217 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.400226 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.400236 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:01:00.400245 | orchestrator | 2026-03-26 02:01:00.400255 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-26 02:01:00.400265 | orchestrator | Thursday 26 March 2026 02:00:55 +0000 (0:00:00.243) 0:00:19.903 ******** 2026-03-26 02:01:00.400274 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.400284 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.400293 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.400302 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.400312 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.400321 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.400330 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:01:00.400340 | orchestrator | 2026-03-26 02:01:00.400349 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-26 02:01:00.400359 | orchestrator | Thursday 26 March 2026 02:00:56 +0000 (0:00:00.225) 0:00:20.129 ******** 2026-03-26 02:01:00.400369 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.400378 | 
orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.400388 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.400397 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.400406 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.400416 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.400425 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:01:00.400434 | orchestrator | 2026-03-26 02:01:00.400444 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-26 02:01:00.400454 | orchestrator | Thursday 26 March 2026 02:00:56 +0000 (0:00:00.274) 0:00:20.403 ******** 2026-03-26 02:01:00.400464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:01:00.400476 | orchestrator | 2026-03-26 02:01:00.400485 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-26 02:01:00.400502 | orchestrator | Thursday 26 March 2026 02:00:56 +0000 (0:00:00.345) 0:00:20.749 ******** 2026-03-26 02:01:00.400518 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.400534 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.400560 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.400575 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.400589 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.400606 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.400621 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:01:00.400638 | orchestrator | 2026-03-26 02:01:00.400655 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-26 02:01:00.400671 | orchestrator | Thursday 26 March 2026 02:00:57 +0000 (0:00:00.521) 0:00:21.270 ******** 2026-03-26 02:01:00.400688 | orchestrator | 
skipping: [testbed-manager] 2026-03-26 02:01:00.400699 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:01:00.400709 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:01:00.400718 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:01:00.400728 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:01:00.400737 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:01:00.400747 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:01:00.400756 | orchestrator | 2026-03-26 02:01:00.400766 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-26 02:01:00.400776 | orchestrator | Thursday 26 March 2026 02:00:57 +0000 (0:00:00.289) 0:00:21.560 ******** 2026-03-26 02:01:00.400786 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.400795 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.400805 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.400814 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.400824 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:01:00.400833 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:01:00.400843 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:01:00.400852 | orchestrator | 2026-03-26 02:01:00.400862 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-26 02:01:00.400899 | orchestrator | Thursday 26 March 2026 02:00:58 +0000 (0:00:01.103) 0:00:22.663 ******** 2026-03-26 02:01:00.400910 | orchestrator | ok: [testbed-manager] 2026-03-26 02:01:00.400920 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:01:00.400930 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:01:00.400940 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:01:00.400950 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:01:00.400971 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:01:00.400981 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:01:00.400991 | orchestrator | 
2026-03-26 02:01:00.401001 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-26 02:01:00.401011 | orchestrator | Thursday 26 March 2026 02:00:59 +0000 (0:00:00.571) 0:00:23.235 ********
2026-03-26 02:01:00.401021 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:00.401031 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:00.401040 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:00.401050 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:00.401070 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:01:42.452185 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:01:42.452327 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:01:42.452347 | orchestrator |
2026-03-26 02:01:42.452365 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-26 02:01:42.452382 | orchestrator | Thursday 26 March 2026 02:01:00 +0000 (0:00:01.168) 0:00:24.404 ********
2026-03-26 02:01:42.452396 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.452411 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.452426 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.452441 | orchestrator | changed: [testbed-manager]
2026-03-26 02:01:42.452458 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:01:42.452474 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:01:42.452490 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:01:42.452506 | orchestrator |
2026-03-26 02:01:42.452521 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-03-26 02:01:42.452538 | orchestrator | Thursday 26 March 2026 02:01:16 +0000 (0:00:16.032) 0:00:40.436 ********
2026-03-26 02:01:42.452554 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.452601 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.452617 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.452632 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.452646 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.452662 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.452678 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.452696 | orchestrator |
2026-03-26 02:01:42.452714 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-03-26 02:01:42.452732 | orchestrator | Thursday 26 March 2026 02:01:16 +0000 (0:00:00.214) 0:00:40.651 ********
2026-03-26 02:01:42.452750 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.452768 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.452786 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.452804 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.452821 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.452839 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.452854 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.452870 | orchestrator |
2026-03-26 02:01:42.452885 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-03-26 02:01:42.452932 | orchestrator | Thursday 26 March 2026 02:01:16 +0000 (0:00:00.239) 0:00:40.890 ********
2026-03-26 02:01:42.452947 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.452964 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.452976 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.452991 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.453008 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.453024 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.453041 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.453057 | orchestrator |
2026-03-26 02:01:42.453073 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-03-26 02:01:42.453087 | orchestrator | Thursday 26 March 2026 02:01:17 +0000 (0:00:00.239) 0:00:41.130 ********
2026-03-26 02:01:42.453105 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:01:42.453122 | orchestrator |
2026-03-26 02:01:42.453136 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-03-26 02:01:42.453151 | orchestrator | Thursday 26 March 2026 02:01:17 +0000 (0:00:00.333) 0:00:41.463 ********
2026-03-26 02:01:42.453166 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.453180 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.453196 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.453211 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.453225 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.453239 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.453251 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.453264 | orchestrator |
2026-03-26 02:01:42.453276 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-03-26 02:01:42.453291 | orchestrator | Thursday 26 March 2026 02:01:19 +0000 (0:00:01.723) 0:00:43.187 ********
2026-03-26 02:01:42.453306 | orchestrator | changed: [testbed-manager]
2026-03-26 02:01:42.453320 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:01:42.453335 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:01:42.453350 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:01:42.453363 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:01:42.453372 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:01:42.453381 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:01:42.453389 | orchestrator |
2026-03-26 02:01:42.453398 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-03-26 02:01:42.453425 | orchestrator | Thursday 26 March 2026 02:01:20 +0000 (0:00:01.058) 0:00:44.246 ********
2026-03-26 02:01:42.453434 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.453443 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.453451 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.453474 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.453483 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.453491 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.453500 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.453508 | orchestrator |
2026-03-26 02:01:42.453517 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-03-26 02:01:42.453527 | orchestrator | Thursday 26 March 2026 02:01:21 +0000 (0:00:00.824) 0:00:45.070 ********
2026-03-26 02:01:42.453537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:01:42.453548 | orchestrator |
2026-03-26 02:01:42.453556 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-03-26 02:01:42.453566 | orchestrator | Thursday 26 March 2026 02:01:21 +0000 (0:00:00.347) 0:00:45.418 ********
2026-03-26 02:01:42.453575 | orchestrator | changed: [testbed-manager]
2026-03-26 02:01:42.453584 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:01:42.453592 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:01:42.453601 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:01:42.453610 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:01:42.453619 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:01:42.453627 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:01:42.453636 | orchestrator |
2026-03-26 02:01:42.453668 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-03-26 02:01:42.453677 | orchestrator | Thursday 26 March 2026 02:01:22 +0000 (0:00:01.002) 0:00:46.420 ********
2026-03-26 02:01:42.453687 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:01:42.453695 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:01:42.453704 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:01:42.453713 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:01:42.453722 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:01:42.453730 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:01:42.453739 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:01:42.453747 | orchestrator |
2026-03-26 02:01:42.453756 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-03-26 02:01:42.453765 | orchestrator | Thursday 26 March 2026 02:01:22 +0000 (0:00:00.255) 0:00:46.675 ********
2026-03-26 02:01:42.453774 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:01:42.453783 | orchestrator |
2026-03-26 02:01:42.453792 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-03-26 02:01:42.453800 | orchestrator | Thursday 26 March 2026 02:01:22 +0000 (0:00:00.330) 0:00:47.006 ********
2026-03-26 02:01:42.453809 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.453818 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.453826 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.453835 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.453843 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.453852 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.453861 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.453869 | orchestrator |
2026-03-26 02:01:42.453878 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-03-26 02:01:42.453887 | orchestrator | Thursday 26 March 2026 02:01:24 +0000 (0:00:01.779) 0:00:48.786 ********
2026-03-26 02:01:42.453947 | orchestrator | changed: [testbed-manager]
2026-03-26 02:01:42.453959 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:01:42.453968 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:01:42.453977 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:01:42.453985 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:01:42.453994 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:01:42.454003 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:01:42.454083 | orchestrator |
2026-03-26 02:01:42.454094 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-03-26 02:01:42.454103 | orchestrator | Thursday 26 March 2026 02:01:25 +0000 (0:00:01.139) 0:00:49.925 ********
2026-03-26 02:01:42.454112 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:01:42.454121 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:01:42.454129 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:01:42.454160 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:01:42.454169 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:01:42.454177 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:01:42.454186 | orchestrator | changed: [testbed-manager]
2026-03-26 02:01:42.454194 | orchestrator |
2026-03-26 02:01:42.454203 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-03-26 02:01:42.454212 | orchestrator | Thursday 26 March 2026 02:01:39 +0000 (0:00:13.497) 0:01:03.423 ********
2026-03-26 02:01:42.454220 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.454229 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.454237 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.454246 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.454254 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.454263 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.454271 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.454280 | orchestrator |
2026-03-26 02:01:42.454288 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-03-26 02:01:42.454297 | orchestrator | Thursday 26 March 2026 02:01:40 +0000 (0:00:01.248) 0:01:04.671 ********
2026-03-26 02:01:42.454306 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.454314 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.454322 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.454331 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.454340 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.454348 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.454356 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.454365 | orchestrator |
2026-03-26 02:01:42.454373 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-03-26 02:01:42.454382 | orchestrator | Thursday 26 March 2026 02:01:41 +0000 (0:00:00.945) 0:01:05.617 ********
2026-03-26 02:01:42.454397 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.454406 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.454415 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.454423 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.454431 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.454440 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.454448 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.454457 | orchestrator |
2026-03-26 02:01:42.454465 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-03-26 02:01:42.454474 | orchestrator | Thursday 26 March 2026 02:01:41 +0000 (0:00:00.272) 0:01:05.889 ********
2026-03-26 02:01:42.454483 | orchestrator | ok: [testbed-manager]
2026-03-26 02:01:42.454492 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:01:42.454500 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:01:42.454508 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:01:42.454517 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:01:42.454525 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:01:42.454533 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:01:42.454542 | orchestrator |
2026-03-26 02:01:42.454551 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-03-26 02:01:42.454559 | orchestrator | Thursday 26 March 2026 02:01:42 +0000 (0:00:00.253) 0:01:06.143 ********
2026-03-26 02:01:42.454568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:01:42.454578 | orchestrator |
2026-03-26 02:01:42.454594 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-03-26 02:04:08.736219 | orchestrator | Thursday 26 March 2026 02:01:42 +0000 (0:00:00.316) 0:01:06.459 ********
2026-03-26 02:04:08.736395 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:04:08.736416 | orchestrator | ok: [testbed-manager]
2026-03-26 02:04:08.736428 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:04:08.736440 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:04:08.736451 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:04:08.736462 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:04:08.736473 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:04:08.736484 | orchestrator |
2026-03-26 02:04:08.736496 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-03-26 02:04:08.736507 | orchestrator | Thursday 26 March 2026 02:01:44 +0000 (0:00:01.604) 0:01:08.063 ********
2026-03-26 02:04:08.736518 | orchestrator | changed: [testbed-manager]
2026-03-26 02:04:08.736531 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:04:08.736541 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:04:08.736552 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:04:08.736563 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:04:08.736574 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:04:08.736585 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:04:08.736595 | orchestrator |
2026-03-26 02:04:08.736606 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-03-26 02:04:08.736618 | orchestrator | Thursday 26 March 2026 02:01:44 +0000 (0:00:00.565) 0:01:08.628 ********
2026-03-26 02:04:08.736629 | orchestrator | ok: [testbed-manager]
2026-03-26 02:04:08.736640 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:04:08.736650 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:04:08.736661 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:04:08.736672 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:04:08.736683 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:04:08.736693 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:04:08.736704 | orchestrator |
2026-03-26 02:04:08.736717 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-03-26 02:04:08.736732 | orchestrator | Thursday 26 March 2026 02:01:44 +0000 (0:00:00.241) 0:01:08.870 ********
2026-03-26 02:04:08.736745 | orchestrator | ok: [testbed-manager]
2026-03-26 02:04:08.736757 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:04:08.736771 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:04:08.736784 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:04:08.736797 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:04:08.736810 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:04:08.736823 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:04:08.736835 | orchestrator |
2026-03-26 02:04:08.736849 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-03-26 02:04:08.736862 | orchestrator | Thursday 26 March 2026 02:01:45 +0000 (0:00:01.143) 0:01:10.013 ********
2026-03-26 02:04:08.736875 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:04:08.736888 | orchestrator | changed: [testbed-manager]
2026-03-26 02:04:08.736901 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:04:08.736914 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:04:08.736927 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:04:08.736966 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:04:08.737006 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:04:08.737020 | orchestrator |
2026-03-26 02:04:08.737038 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-03-26 02:04:08.737051 | orchestrator | Thursday 26 March 2026 02:01:47 +0000 (0:00:01.661) 0:01:11.674 ********
2026-03-26 02:04:08.737065 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:04:08.737078 | orchestrator | ok: [testbed-manager]
2026-03-26 02:04:08.737089 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:04:08.737100 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:04:08.737111 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:04:08.737122 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:04:08.737132 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:04:08.737143 | orchestrator |
2026-03-26 02:04:08.737154 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-03-26 02:04:08.737201 | orchestrator | Thursday 26 March 2026 02:01:50 +0000 (0:00:03.133) 0:01:14.808 ********
2026-03-26 02:04:08.737213 | orchestrator | ok: [testbed-manager]
2026-03-26 02:04:08.737224 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:04:08.737235 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:04:08.737245 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:04:08.737256 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:04:08.737267 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:04:08.737277 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:04:08.737288 | orchestrator |
2026-03-26 02:04:08.737299 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-03-26 02:04:08.737310 | orchestrator | Thursday 26 March 2026 02:02:35 +0000 (0:00:44.819) 0:01:59.627 ********
2026-03-26 02:04:08.737321 | orchestrator | changed: [testbed-manager]
2026-03-26 02:04:08.737332 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:04:08.737343 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:04:08.737354 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:04:08.737365 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:04:08.737375 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:04:08.737386 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:04:08.737397 | orchestrator |
2026-03-26 02:04:08.737408 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-03-26 02:04:08.737419 | orchestrator | Thursday 26 March 2026 02:03:50 +0000 (0:01:14.819) 0:03:14.446 ********
2026-03-26 02:04:08.737430 | orchestrator | ok: [testbed-manager]
2026-03-26 02:04:08.737441 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:04:08.737452 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:04:08.737463 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:04:08.737474 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:04:08.737485 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:04:08.737495 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:04:08.737506 | orchestrator |
2026-03-26 02:04:08.737517 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-03-26 02:04:08.737528 | orchestrator | Thursday 26 March 2026 02:03:52 +0000 (0:00:01.685) 0:03:16.132 ********
2026-03-26 02:04:08.737539 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:04:08.737550 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:04:08.737561 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:04:08.737572 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:04:08.737582 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:04:08.737593 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:04:08.737604 | orchestrator | changed: [testbed-manager]
2026-03-26 02:04:08.737615 | orchestrator |
2026-03-26 02:04:08.737626 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-03-26 02:04:08.737637 | orchestrator | Thursday 26 March 2026 02:04:07 +0000 (0:00:15.146) 0:03:31.279 ********
2026-03-26 02:04:08.737688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-03-26 02:04:08.737721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-03-26 02:04:08.737746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-03-26 02:04:08.737759 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-26 02:04:08.737770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-03-26 02:04:08.737781 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-03-26 02:04:08.737793 | orchestrator |
2026-03-26 02:04:08.737804 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-03-26 02:04:08.737815 | orchestrator | Thursday 26 March 2026 02:04:07 +0000 (0:00:00.462) 0:03:31.741 ********
2026-03-26 02:04:08.737826 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-26 02:04:08.737837 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-26 02:04:08.737848 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:04:08.737859 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-26 02:04:08.737869 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:04:08.737885 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-26 02:04:08.737896 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:04:08.737907 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:04:08.737918 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-26 02:04:08.737929 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-26 02:04:08.737940 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-03-26 02:04:08.737951 | orchestrator |
2026-03-26 02:04:08.737962 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-03-26 02:04:08.737972 | orchestrator | Thursday 26 March 2026 02:04:08 +0000 (0:00:00.829) 0:03:32.571 ********
2026-03-26 02:04:08.738012 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-26 02:04:08.738108 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-26 02:04:08.738120 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-26 02:04:08.738131 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-26 02:04:08.738141 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-26 02:04:08.738162 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-26 02:04:14.503674 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-26 02:04:14.503776 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-26 02:04:14.503809 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-26 02:04:14.503819 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-26 02:04:14.503828 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-26 02:04:14.503837 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-26 02:04:14.503846 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-26 02:04:14.503855 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-26 02:04:14.503863 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-26 02:04:14.503871 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-26 02:04:14.503877 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-26 02:04:14.503882 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-26 02:04:14.503887 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-26 02:04:14.503892 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-26 02:04:14.503897 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-26 02:04:14.503902 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-26 02:04:14.503907 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-26 02:04:14.503912 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-26 02:04:14.503918 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-26 02:04:14.503923 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-26 02:04:14.503928 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-26 02:04:14.503933 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-26 02:04:14.503938 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-26 02:04:14.503943 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-26 02:04:14.503948 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:04:14.503954 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:04:14.503960 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-26 02:04:14.503965 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-26 02:04:14.503970 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-26 02:04:14.503975 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-26 02:04:14.504008 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-26 02:04:14.504015 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-26 02:04:14.504020 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-26 02:04:14.504025 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-26 02:04:14.504030 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-26 02:04:14.504040 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-26 02:04:14.504046 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:04:14.504051 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:04:14.504056 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-26 02:04:14.504061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-26 02:04:14.504066 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-03-26 02:04:14.504071 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-26 02:04:14.504076 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-26 02:04:14.504093 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-03-26 02:04:14.504098 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-26 02:04:14.504103 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-26 02:04:14.504108 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-03-26 02:04:14.504113 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-26 02:04:14.504118 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-26 02:04:14.504123 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-03-26 02:04:14.504128 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-26 02:04:14.504133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-26 02:04:14.504138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-03-26 02:04:14.504143 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-26 02:04:14.504148 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-26 02:04:14.504154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-26 02:04:14.504159 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-26 02:04:14.504164 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-26 02:04:14.504169 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-26 02:04:14.504174 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-26 02:04:14.504179 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-26 02:04:14.504184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-03-26 02:04:14.504189 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-26 02:04:14.504194 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-26 02:04:14.504199 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-03-26 02:04:14.504204 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-03-26 02:04:14.504209 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-03-26 02:04:14.504215 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-03-26 02:04:14.504223 | orchestrator |
2026-03-26 02:04:14.504229 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-03-26 02:04:14.504234 | orchestrator | Thursday 26 March 2026 02:04:13 +0000 (0:00:04.830) 0:03:37.401 ********
2026-03-26 02:04:14.504240 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-26 02:04:14.504246 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-26 02:04:14.504252 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-26 02:04:14.504258 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-26 02:04:14.504266 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-26 02:04:14.504272 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-26 02:04:14.504279 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-03-26 02:04:14.504284 | orchestrator |
2026-03-26 02:04:14.504291 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-03-26 02:04:14.504296 | orchestrator | Thursday 26 March 2026 02:04:13 +0000 (0:00:00.583) 0:03:37.985 ********
2026-03-26 02:04:14.504302 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:14.504308 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:04:14.504314 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:14.504320 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:14.504326 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:04:14.504332 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:04:14.504339 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:14.504345 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:04:14.504351 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:14.504357 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:14.504366 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:28.920135 | orchestrator |
2026-03-26 02:04:28.920257 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-26 02:04:28.920274 | orchestrator | Thursday 26 March 2026 02:04:14 +0000 (0:00:00.524) 0:03:38.509 ********
2026-03-26 02:04:28.920287 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-26 02:04:28.920299 | orchestrator | skipping: [testbed-node-3] =>
(item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-26 02:04:28.920310 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:04:28.920322 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:04:28.920333 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-26 02:04:28.920344 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-26 02:04:28.920355 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:04:28.920366 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:04:28.920377 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-26 02:04:28.920388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-26 02:04:28.920399 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-26 02:04:28.920409 | orchestrator | 2026-03-26 02:04:28.920420 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-03-26 02:04:28.920458 | orchestrator | Thursday 26 March 2026 02:04:15 +0000 (0:00:00.605) 0:03:39.115 ******** 2026-03-26 02:04:28.920469 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-26 02:04:28.920481 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:04:28.920491 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-26 02:04:28.920502 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-26 02:04:28.920513 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:04:28.920531 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
02:04:28.920550 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-03-26 02:04:28.920569 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:04:28.920586 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-26 02:04:28.920605 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-26 02:04:28.920625 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-03-26 02:04:28.920645 | orchestrator | 2026-03-26 02:04:28.920665 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-03-26 02:04:28.920680 | orchestrator | Thursday 26 March 2026 02:04:15 +0000 (0:00:00.611) 0:03:39.727 ******** 2026-03-26 02:04:28.920694 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:04:28.920707 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:04:28.920719 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:04:28.920732 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:04:28.920744 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:04:28.920757 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:04:28.920771 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:04:28.920783 | orchestrator | 2026-03-26 02:04:28.920796 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-03-26 02:04:28.920809 | orchestrator | Thursday 26 March 2026 02:04:16 +0000 (0:00:00.365) 0:03:40.092 ******** 2026-03-26 02:04:28.920822 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:04:28.920835 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:04:28.920851 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:04:28.920871 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:04:28.920889 | orchestrator | ok: 
[testbed-node-0] 2026-03-26 02:04:28.920907 | orchestrator | ok: [testbed-manager] 2026-03-26 02:04:28.920925 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:04:28.920944 | orchestrator | 2026-03-26 02:04:28.920961 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-03-26 02:04:28.920979 | orchestrator | Thursday 26 March 2026 02:04:22 +0000 (0:00:06.599) 0:03:46.692 ******** 2026-03-26 02:04:28.921025 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-03-26 02:04:28.921044 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-03-26 02:04:28.921062 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:04:28.921080 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-03-26 02:04:28.921100 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:04:28.921118 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-03-26 02:04:28.921132 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:04:28.921145 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-03-26 02:04:28.921166 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:04:28.921184 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-03-26 02:04:28.921225 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:04:28.921246 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:04:28.921266 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-03-26 02:04:28.921284 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:04:28.921303 | orchestrator | 2026-03-26 02:04:28.921335 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-26 02:04:28.921354 | orchestrator | Thursday 26 March 2026 02:04:23 +0000 (0:00:00.338) 0:03:47.031 ******** 2026-03-26 02:04:28.921373 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-26 02:04:28.921392 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-26 
02:04:28.921410 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-26 02:04:28.921454 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-26 02:04:28.921473 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-26 02:04:28.921492 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-26 02:04:28.921510 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-26 02:04:28.921529 | orchestrator | 2026-03-26 02:04:28.921548 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-26 02:04:28.921567 | orchestrator | Thursday 26 March 2026 02:04:24 +0000 (0:00:01.163) 0:03:48.195 ******** 2026-03-26 02:04:28.921587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:04:28.921609 | orchestrator | 2026-03-26 02:04:28.921628 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-26 02:04:28.921648 | orchestrator | Thursday 26 March 2026 02:04:24 +0000 (0:00:00.448) 0:03:48.643 ******** 2026-03-26 02:04:28.921666 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:04:28.921684 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:04:28.921703 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:04:28.921722 | orchestrator | ok: [testbed-manager] 2026-03-26 02:04:28.921740 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:04:28.921758 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:04:28.921775 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:04:28.921793 | orchestrator | 2026-03-26 02:04:28.921812 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-26 02:04:28.921832 | orchestrator | Thursday 26 March 2026 02:04:26 +0000 (0:00:01.429) 0:03:50.072 ******** 2026-03-26 
02:04:28.921851 | orchestrator | ok: [testbed-manager] 2026-03-26 02:04:28.921868 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:04:28.921886 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:04:28.921904 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:04:28.921923 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:04:28.921942 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:04:28.921960 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:04:28.921979 | orchestrator | 2026-03-26 02:04:28.922134 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-26 02:04:28.922155 | orchestrator | Thursday 26 March 2026 02:04:26 +0000 (0:00:00.660) 0:03:50.732 ******** 2026-03-26 02:04:28.922174 | orchestrator | changed: [testbed-manager] 2026-03-26 02:04:28.922193 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:04:28.922212 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:04:28.922231 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:04:28.922249 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:04:28.922268 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:04:28.922287 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:04:28.922305 | orchestrator | 2026-03-26 02:04:28.922316 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-26 02:04:28.922327 | orchestrator | Thursday 26 March 2026 02:04:27 +0000 (0:00:00.640) 0:03:51.373 ******** 2026-03-26 02:04:28.922338 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:04:28.922349 | orchestrator | ok: [testbed-manager] 2026-03-26 02:04:28.922360 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:04:28.922370 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:04:28.922381 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:04:28.922391 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:04:28.922402 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:04:28.922412 | 
orchestrator | 2026-03-26 02:04:28.922423 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-26 02:04:28.922445 | orchestrator | Thursday 26 March 2026 02:04:27 +0000 (0:00:00.601) 0:03:51.975 ******** 2026-03-26 02:04:28.922468 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774489204.3152304, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:28.922483 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774489229.2050931, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:28.922495 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774489232.927024, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-03-26 02:04:28.922535 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774489223.8100085, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.029935 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774489221.6920967, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031103 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774489235.4309287, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031158 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1774489217.9835355, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031202 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031231 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031245 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031258 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031304 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031318 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031330 | 
orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 02:04:34.031350 | orchestrator | 2026-03-26 02:04:34.031364 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-26 02:04:34.031378 | orchestrator | Thursday 26 March 2026 02:04:28 +0000 (0:00:00.949) 0:03:52.925 ******** 2026-03-26 02:04:34.031390 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:04:34.031403 | orchestrator | changed: [testbed-manager] 2026-03-26 02:04:34.031414 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:04:34.031426 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:04:34.031438 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:04:34.031450 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:04:34.031462 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:04:34.031474 | orchestrator | 2026-03-26 02:04:34.031485 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-03-26 02:04:34.031497 | orchestrator | Thursday 26 March 2026 02:04:29 +0000 (0:00:01.067) 0:03:53.992 ******** 2026-03-26 02:04:34.031508 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:04:34.031520 | orchestrator | changed: [testbed-manager] 2026-03-26 02:04:34.031532 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:04:34.031545 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:04:34.031557 | orchestrator | changed: [testbed-node-0] 2026-03-26 
02:04:34.031570 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:04:34.031582 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:04:34.031594 | orchestrator | 2026-03-26 02:04:34.031613 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-26 02:04:34.031626 | orchestrator | Thursday 26 March 2026 02:04:31 +0000 (0:00:01.206) 0:03:55.199 ******** 2026-03-26 02:04:34.031637 | orchestrator | changed: [testbed-manager] 2026-03-26 02:04:34.031650 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:04:34.031657 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:04:34.031665 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:04:34.031672 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:04:34.031679 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:04:34.031686 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:04:34.031693 | orchestrator | 2026-03-26 02:04:34.031700 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-26 02:04:34.031707 | orchestrator | Thursday 26 March 2026 02:04:32 +0000 (0:00:01.160) 0:03:56.359 ******** 2026-03-26 02:04:34.031714 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:04:34.031722 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:04:34.031729 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:04:34.031736 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:04:34.031743 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:04:34.031750 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:04:34.031757 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:04:34.031769 | orchestrator | 2026-03-26 02:04:34.031781 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-26 02:04:34.031794 | orchestrator | Thursday 26 March 2026 02:04:32 +0000 (0:00:00.362) 0:03:56.722 ******** 2026-03-26 
02:04:34.031805 | orchestrator | ok: [testbed-manager] 2026-03-26 02:04:34.031817 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:04:34.031827 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:04:34.031837 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:04:34.031848 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:04:34.031858 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:04:34.031869 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:04:34.031879 | orchestrator | 2026-03-26 02:04:34.031890 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-26 02:04:34.031901 | orchestrator | Thursday 26 March 2026 02:04:33 +0000 (0:00:00.797) 0:03:57.519 ******** 2026-03-26 02:04:34.031915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:04:34.031938 | orchestrator | 2026-03-26 02:04:34.031949 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-26 02:04:34.031974 | orchestrator | Thursday 26 March 2026 02:04:34 +0000 (0:00:00.518) 0:03:58.037 ******** 2026-03-26 02:05:52.790866 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.790965 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:05:52.790977 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:05:52.790986 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:05:52.790993 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:05:52.791001 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:05:52.791008 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:05:52.791015 | orchestrator | 2026-03-26 02:05:52.791052 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-26 02:05:52.791063 | orchestrator | 
Thursday 26 March 2026 02:04:42 +0000 (0:00:08.264) 0:04:06.302 ******** 2026-03-26 02:05:52.791071 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:05:52.791087 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:05:52.791094 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:05:52.791102 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.791109 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:05:52.791117 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:05:52.791124 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:05:52.791131 | orchestrator | 2026-03-26 02:05:52.791139 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-26 02:05:52.791146 | orchestrator | Thursday 26 March 2026 02:04:43 +0000 (0:00:01.293) 0:04:07.595 ******** 2026-03-26 02:05:52.791153 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.791161 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:05:52.791168 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:05:52.791175 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:05:52.791182 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:05:52.791189 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:05:52.791196 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:05:52.791203 | orchestrator | 2026-03-26 02:05:52.791210 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-26 02:05:52.791218 | orchestrator | Thursday 26 March 2026 02:04:44 +0000 (0:00:01.132) 0:04:08.728 ******** 2026-03-26 02:05:52.791225 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.791232 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:05:52.791239 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:05:52.791246 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:05:52.791254 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:05:52.791261 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:05:52.791268 | orchestrator | ok: 
[testbed-node-2] 2026-03-26 02:05:52.791276 | orchestrator | 2026-03-26 02:05:52.791283 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-26 02:05:52.791291 | orchestrator | Thursday 26 March 2026 02:04:45 +0000 (0:00:00.334) 0:04:09.063 ******** 2026-03-26 02:05:52.791299 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.791306 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:05:52.791313 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:05:52.791325 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:05:52.791337 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:05:52.791349 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:05:52.791361 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:05:52.791372 | orchestrator | 2026-03-26 02:05:52.791385 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-26 02:05:52.791395 | orchestrator | Thursday 26 March 2026 02:04:45 +0000 (0:00:00.329) 0:04:09.392 ******** 2026-03-26 02:05:52.791402 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.791409 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:05:52.791417 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:05:52.791453 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:05:52.791467 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:05:52.791479 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:05:52.791489 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:05:52.791500 | orchestrator | 2026-03-26 02:05:52.791510 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-26 02:05:52.791523 | orchestrator | Thursday 26 March 2026 02:04:45 +0000 (0:00:00.344) 0:04:09.737 ******** 2026-03-26 02:05:52.791536 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.791547 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:05:52.791560 | orchestrator | ok: 
[testbed-node-3] 2026-03-26 02:05:52.791572 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:05:52.791584 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:05:52.791595 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:05:52.791607 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:05:52.791621 | orchestrator | 2026-03-26 02:05:52.791634 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-26 02:05:52.791645 | orchestrator | Thursday 26 March 2026 02:04:51 +0000 (0:00:05.484) 0:04:15.221 ******** 2026-03-26 02:05:52.791658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:05:52.791673 | orchestrator | 2026-03-26 02:05:52.791684 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-26 02:05:52.791696 | orchestrator | Thursday 26 March 2026 02:04:51 +0000 (0:00:00.456) 0:04:15.678 ******** 2026-03-26 02:05:52.791706 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-26 02:05:52.791717 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-26 02:05:52.791728 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-26 02:05:52.791738 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-26 02:05:52.791749 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:05:52.791779 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-26 02:05:52.791791 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-26 02:05:52.791803 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:05:52.791815 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-26 02:05:52.791827 | orchestrator | 
skipping: [testbed-node-4] 2026-03-26 02:05:52.791839 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-26 02:05:52.791851 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-26 02:05:52.791862 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-26 02:05:52.791874 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:05:52.791886 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-26 02:05:52.791898 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-26 02:05:52.791930 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:05:52.791942 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:05:52.791954 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-26 02:05:52.791967 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-26 02:05:52.791979 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:05:52.791991 | orchestrator | 2026-03-26 02:05:52.792001 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-26 02:05:52.792009 | orchestrator | Thursday 26 March 2026 02:04:52 +0000 (0:00:00.432) 0:04:16.111 ******** 2026-03-26 02:05:52.792017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:05:52.792065 | orchestrator | 2026-03-26 02:05:52.792074 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-26 02:05:52.792092 | orchestrator | Thursday 26 March 2026 02:04:52 +0000 (0:00:00.463) 0:04:16.574 ******** 2026-03-26 02:05:52.792100 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-26 02:05:52.792107 | orchestrator | skipping: 
[testbed-manager] 2026-03-26 02:05:52.792115 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-26 02:05:52.792122 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:05:52.792129 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-26 02:05:52.792137 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-26 02:05:52.792144 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:05:52.792151 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-26 02:05:52.792158 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:05:52.792165 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-26 02:05:52.792172 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:05:52.792179 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:05:52.792186 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-26 02:05:52.792194 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:05:52.792201 | orchestrator | 2026-03-26 02:05:52.792208 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-26 02:05:52.792216 | orchestrator | Thursday 26 March 2026 02:04:52 +0000 (0:00:00.391) 0:04:16.965 ******** 2026-03-26 02:05:52.792223 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:05:52.792230 | orchestrator | 2026-03-26 02:05:52.792237 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-26 02:05:52.792244 | orchestrator | Thursday 26 March 2026 02:04:53 +0000 (0:00:00.463) 0:04:17.429 ******** 2026-03-26 02:05:52.792252 | orchestrator | changed: [testbed-node-5] 2026-03-26 
02:05:52.792259 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:05:52.792266 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:05:52.792273 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:05:52.792286 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:05:52.792294 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:05:52.792301 | orchestrator | changed: [testbed-manager] 2026-03-26 02:05:52.792308 | orchestrator | 2026-03-26 02:05:52.792317 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-26 02:05:52.792329 | orchestrator | Thursday 26 March 2026 02:05:28 +0000 (0:00:35.079) 0:04:52.509 ******** 2026-03-26 02:05:52.792341 | orchestrator | changed: [testbed-manager] 2026-03-26 02:05:52.792353 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:05:52.792365 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:05:52.792377 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:05:52.792390 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:05:52.792402 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:05:52.792413 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:05:52.792421 | orchestrator | 2026-03-26 02:05:52.792428 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-26 02:05:52.792435 | orchestrator | Thursday 26 March 2026 02:05:36 +0000 (0:00:08.100) 0:05:00.609 ******** 2026-03-26 02:05:52.792442 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:05:52.792449 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:05:52.792457 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:05:52.792463 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:05:52.792470 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:05:52.792477 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:05:52.792484 | orchestrator | changed: [testbed-manager] 2026-03-26 02:05:52.792491 | 
orchestrator | 2026-03-26 02:05:52.792499 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-03-26 02:05:52.792512 | orchestrator | Thursday 26 March 2026 02:05:44 +0000 (0:00:08.292) 0:05:08.902 ******** 2026-03-26 02:05:52.792519 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:05:52.792527 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:05:52.792534 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:05:52.792541 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:05:52.792548 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:05:52.792555 | orchestrator | ok: [testbed-manager] 2026-03-26 02:05:52.792562 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:05:52.792569 | orchestrator | 2026-03-26 02:05:52.792576 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-03-26 02:05:52.792583 | orchestrator | Thursday 26 March 2026 02:05:46 +0000 (0:00:01.785) 0:05:10.688 ******** 2026-03-26 02:05:52.792590 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:05:52.792597 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:05:52.792604 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:05:52.792611 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:05:52.792618 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:05:52.792626 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:05:52.792633 | orchestrator | changed: [testbed-manager] 2026-03-26 02:05:52.792640 | orchestrator | 2026-03-26 02:05:52.792655 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-03-26 02:06:05.031810 | orchestrator | Thursday 26 March 2026 02:05:52 +0000 (0:00:06.100) 0:05:16.788 ******** 2026-03-26 02:06:05.031898 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:06:05.031909 | orchestrator | 2026-03-26 02:06:05.031916 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-03-26 02:06:05.031926 | orchestrator | Thursday 26 March 2026 02:05:53 +0000 (0:00:00.464) 0:05:17.253 ******** 2026-03-26 02:06:05.031935 | orchestrator | changed: [testbed-manager] 2026-03-26 02:06:05.031946 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:06:05.031954 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:06:05.031963 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:06:05.031972 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:06:05.031981 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:06:05.031988 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:06:05.031997 | orchestrator | 2026-03-26 02:06:05.032006 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-03-26 02:06:05.032016 | orchestrator | Thursday 26 March 2026 02:05:53 +0000 (0:00:00.750) 0:05:18.003 ******** 2026-03-26 02:06:05.032025 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:06:05.032035 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:06:05.032070 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:06:05.032079 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:06:05.032088 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:06:05.032097 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:06:05.032103 | orchestrator | ok: [testbed-manager] 2026-03-26 02:06:05.032109 | orchestrator | 2026-03-26 02:06:05.032115 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-03-26 02:06:05.032121 | orchestrator | Thursday 26 March 2026 02:05:55 +0000 (0:00:01.731) 0:05:19.735 ******** 2026-03-26 02:06:05.032127 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:06:05.032132 | orchestrator | changed: [testbed-node-0] 
2026-03-26 02:06:05.032138 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:06:05.032143 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:06:05.032149 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:06:05.032155 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:06:05.032160 | orchestrator | changed: [testbed-manager] 2026-03-26 02:06:05.032166 | orchestrator | 2026-03-26 02:06:05.032171 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-03-26 02:06:05.032177 | orchestrator | Thursday 26 March 2026 02:05:56 +0000 (0:00:00.814) 0:05:20.550 ******** 2026-03-26 02:06:05.032199 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:06:05.032204 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:06:05.032210 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:06:05.032215 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:06:05.032221 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:06:05.032226 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:06:05.032231 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:06:05.032237 | orchestrator | 2026-03-26 02:06:05.032242 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-03-26 02:06:05.032247 | orchestrator | Thursday 26 March 2026 02:05:56 +0000 (0:00:00.365) 0:05:20.915 ******** 2026-03-26 02:06:05.032253 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:06:05.032258 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:06:05.032264 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:06:05.032281 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:06:05.032287 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:06:05.032292 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:06:05.032297 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:06:05.032303 | orchestrator | 2026-03-26 02:06:05.032308 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-03-26 02:06:05.032313 | orchestrator | Thursday 26 March 2026 02:05:57 +0000 (0:00:00.500) 0:05:21.416 ******** 2026-03-26 02:06:05.032319 | orchestrator | ok: [testbed-manager] 2026-03-26 02:06:05.032324 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:06:05.032329 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:06:05.032335 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:06:05.032340 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:06:05.032345 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:06:05.032351 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:06:05.032356 | orchestrator | 2026-03-26 02:06:05.032362 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-03-26 02:06:05.032367 | orchestrator | Thursday 26 March 2026 02:05:57 +0000 (0:00:00.403) 0:05:21.819 ******** 2026-03-26 02:06:05.032374 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:06:05.032381 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:06:05.032388 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:06:05.032394 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:06:05.032401 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:06:05.032407 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:06:05.032414 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:06:05.032420 | orchestrator | 2026-03-26 02:06:05.032427 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-03-26 02:06:05.032434 | orchestrator | Thursday 26 March 2026 02:05:58 +0000 (0:00:00.364) 0:05:22.183 ******** 2026-03-26 02:06:05.032442 | orchestrator | ok: [testbed-manager] 2026-03-26 02:06:05.032448 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:06:05.032457 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:06:05.032466 | orchestrator | ok: 
[testbed-node-5] 2026-03-26 02:06:05.032474 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:06:05.032486 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:06:05.032497 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:06:05.032508 | orchestrator | 2026-03-26 02:06:05.032516 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-03-26 02:06:05.032525 | orchestrator | Thursday 26 March 2026 02:05:58 +0000 (0:00:00.356) 0:05:22.539 ******** 2026-03-26 02:06:05.032533 | orchestrator | ok: [testbed-manager] =>  2026-03-26 02:06:05.032542 | orchestrator |  docker_version: 5:27.5.1 2026-03-26 02:06:05.032550 | orchestrator | ok: [testbed-node-3] =>  2026-03-26 02:06:05.032559 | orchestrator |  docker_version: 5:27.5.1 2026-03-26 02:06:05.032567 | orchestrator | ok: [testbed-node-4] =>  2026-03-26 02:06:05.032575 | orchestrator |  docker_version: 5:27.5.1 2026-03-26 02:06:05.032583 | orchestrator | ok: [testbed-node-5] =>  2026-03-26 02:06:05.032592 | orchestrator |  docker_version: 5:27.5.1 2026-03-26 02:06:05.032618 | orchestrator | ok: [testbed-node-0] =>  2026-03-26 02:06:05.032636 | orchestrator |  docker_version: 5:27.5.1 2026-03-26 02:06:05.032642 | orchestrator | ok: [testbed-node-1] =>  2026-03-26 02:06:05.032647 | orchestrator |  docker_version: 5:27.5.1 2026-03-26 02:06:05.032653 | orchestrator | ok: [testbed-node-2] =>  2026-03-26 02:06:05.032658 | orchestrator |  docker_version: 5:27.5.1 2026-03-26 02:06:05.032663 | orchestrator | 2026-03-26 02:06:05.032669 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-03-26 02:06:05.032674 | orchestrator | Thursday 26 March 2026 02:05:58 +0000 (0:00:00.331) 0:05:22.871 ******** 2026-03-26 02:06:05.032680 | orchestrator | ok: [testbed-manager] =>  2026-03-26 02:06:05.032685 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-26 02:06:05.032691 | orchestrator | ok: [testbed-node-3] =>  2026-03-26 02:06:05.032696 | 
orchestrator |  docker_cli_version: 5:27.5.1 2026-03-26 02:06:05.032702 | orchestrator | ok: [testbed-node-4] =>  2026-03-26 02:06:05.032707 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-26 02:06:05.032712 | orchestrator | ok: [testbed-node-5] =>  2026-03-26 02:06:05.032718 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-26 02:06:05.032723 | orchestrator | ok: [testbed-node-0] =>  2026-03-26 02:06:05.032728 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-26 02:06:05.032734 | orchestrator | ok: [testbed-node-1] =>  2026-03-26 02:06:05.032739 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-26 02:06:05.032744 | orchestrator | ok: [testbed-node-2] =>  2026-03-26 02:06:05.032750 | orchestrator |  docker_cli_version: 5:27.5.1 2026-03-26 02:06:05.032755 | orchestrator | 2026-03-26 02:06:05.032761 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-03-26 02:06:05.032766 | orchestrator | Thursday 26 March 2026 02:05:59 +0000 (0:00:00.349) 0:05:23.221 ******** 2026-03-26 02:06:05.032772 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:06:05.032777 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:06:05.032783 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:06:05.032788 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:06:05.032793 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:06:05.032799 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:06:05.032804 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:06:05.032809 | orchestrator | 2026-03-26 02:06:05.032815 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-03-26 02:06:05.032821 | orchestrator | Thursday 26 March 2026 02:05:59 +0000 (0:00:00.327) 0:05:23.548 ******** 2026-03-26 02:06:05.032826 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:06:05.032831 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:06:05.032837 
| orchestrator | skipping: [testbed-node-4] 2026-03-26 02:06:05.032842 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:06:05.032848 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:06:05.032853 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:06:05.032858 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:06:05.032864 | orchestrator | 2026-03-26 02:06:05.032869 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-03-26 02:06:05.032875 | orchestrator | Thursday 26 March 2026 02:05:59 +0000 (0:00:00.306) 0:05:23.854 ******** 2026-03-26 02:06:05.032882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:06:05.032889 | orchestrator | 2026-03-26 02:06:05.032899 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-03-26 02:06:05.032904 | orchestrator | Thursday 26 March 2026 02:06:00 +0000 (0:00:00.433) 0:05:24.288 ******** 2026-03-26 02:06:05.032910 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:06:05.032915 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:06:05.032921 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:06:05.032926 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:06:05.032932 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:06:05.032941 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:06:05.032946 | orchestrator | ok: [testbed-manager] 2026-03-26 02:06:05.032952 | orchestrator | 2026-03-26 02:06:05.032957 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-03-26 02:06:05.032963 | orchestrator | Thursday 26 March 2026 02:06:01 +0000 (0:00:01.019) 0:05:25.308 ******** 2026-03-26 02:06:05.032968 | orchestrator | ok: [testbed-node-5] 
2026-03-26 02:06:05.032973 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:06:05.032979 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:06:05.032984 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:06:05.032990 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:06:05.032995 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:06:05.033000 | orchestrator | ok: [testbed-manager] 2026-03-26 02:06:05.033006 | orchestrator | 2026-03-26 02:06:05.033011 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-03-26 02:06:05.033018 | orchestrator | Thursday 26 March 2026 02:06:04 +0000 (0:00:03.173) 0:05:28.482 ******** 2026-03-26 02:06:05.033023 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-03-26 02:06:05.033029 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-03-26 02:06:05.033035 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-03-26 02:06:05.033040 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-03-26 02:06:05.033061 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-03-26 02:06:05.033067 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-03-26 02:06:05.033072 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:06:05.033078 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-03-26 02:06:05.033083 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-03-26 02:06:05.033089 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:06:05.033094 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-03-26 02:06:05.033100 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-03-26 02:06:05.033105 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-03-26 02:06:05.033110 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2026-03-26 02:06:05.033116 | 
orchestrator | skipping: [testbed-node-4] 2026-03-26 02:06:05.033121 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-03-26 02:06:05.033131 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:07:04.081583 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-03-26 02:07:04.081699 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-03-26 02:07:04.081714 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-03-26 02:07:04.081727 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-03-26 02:07:04.081738 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-03-26 02:07:04.081749 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:07:04.081762 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:07:04.081773 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-03-26 02:07:04.081784 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-03-26 02:07:04.081795 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-03-26 02:07:04.081805 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:07:04.081817 | orchestrator | 2026-03-26 02:07:04.081828 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-03-26 02:07:04.081841 | orchestrator | Thursday 26 March 2026 02:06:05 +0000 (0:00:00.724) 0:05:29.207 ******** 2026-03-26 02:07:04.081852 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.081863 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.081874 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.081885 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.081897 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.081908 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.081942 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.081954 | orchestrator | 2026-03-26 
02:07:04.081965 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-03-26 02:07:04.081976 | orchestrator | Thursday 26 March 2026 02:06:11 +0000 (0:00:06.691) 0:05:35.898 ******** 2026-03-26 02:07:04.081987 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.081998 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.082009 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.082086 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.082098 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.082108 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.082123 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.082137 | orchestrator | 2026-03-26 02:07:04.082182 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-03-26 02:07:04.082200 | orchestrator | Thursday 26 March 2026 02:06:13 +0000 (0:00:01.151) 0:05:37.050 ******** 2026-03-26 02:07:04.082219 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.082237 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.082255 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.082274 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.082292 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.082311 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.082328 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.082347 | orchestrator | 2026-03-26 02:07:04.082359 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-03-26 02:07:04.082370 | orchestrator | Thursday 26 March 2026 02:06:21 +0000 (0:00:08.358) 0:05:45.408 ******** 2026-03-26 02:07:04.082381 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.082392 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.082402 | orchestrator | changed: [testbed-node-4] 2026-03-26 
02:07:04.082413 | orchestrator | changed: [testbed-manager] 2026-03-26 02:07:04.082424 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.082434 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.082445 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.082456 | orchestrator | 2026-03-26 02:07:04.082467 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-03-26 02:07:04.082478 | orchestrator | Thursday 26 March 2026 02:06:24 +0000 (0:00:03.275) 0:05:48.684 ******** 2026-03-26 02:07:04.082489 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.082499 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.082510 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.082521 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.082532 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.082542 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.082553 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.082564 | orchestrator | 2026-03-26 02:07:04.082574 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-03-26 02:07:04.082585 | orchestrator | Thursday 26 March 2026 02:06:26 +0000 (0:00:01.393) 0:05:50.077 ******** 2026-03-26 02:07:04.082596 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.082607 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.082618 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.082628 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.082639 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.082650 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.082661 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.082672 | orchestrator | 2026-03-26 02:07:04.082682 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2026-03-26 
02:07:04.082693 | orchestrator | Thursday 26 March 2026 02:06:27 +0000 (0:00:01.803) 0:05:51.881 ******** 2026-03-26 02:07:04.082704 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:07:04.082715 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:07:04.082725 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:07:04.082736 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:07:04.082758 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:07:04.082769 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:07:04.082780 | orchestrator | changed: [testbed-manager] 2026-03-26 02:07:04.082790 | orchestrator | 2026-03-26 02:07:04.082801 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-03-26 02:07:04.082812 | orchestrator | Thursday 26 March 2026 02:06:28 +0000 (0:00:00.661) 0:05:52.542 ******** 2026-03-26 02:07:04.082823 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.082834 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.082845 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.082855 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.082866 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.082877 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.082887 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.082898 | orchestrator | 2026-03-26 02:07:04.082909 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-03-26 02:07:04.082940 | orchestrator | Thursday 26 March 2026 02:06:37 +0000 (0:00:08.552) 0:06:01.095 ******** 2026-03-26 02:07:04.082952 | orchestrator | changed: [testbed-manager] 2026-03-26 02:07:04.082963 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.082974 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.082984 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.082995 | orchestrator | changed: 
[testbed-node-0] 2026-03-26 02:07:04.083006 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.083016 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.083027 | orchestrator | 2026-03-26 02:07:04.083038 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-03-26 02:07:04.083049 | orchestrator | Thursday 26 March 2026 02:06:38 +0000 (0:00:01.018) 0:06:02.114 ******** 2026-03-26 02:07:04.083060 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.083071 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.083082 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.083092 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.083103 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.083114 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.083124 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.083135 | orchestrator | 2026-03-26 02:07:04.083230 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-03-26 02:07:04.083247 | orchestrator | Thursday 26 March 2026 02:06:46 +0000 (0:00:08.642) 0:06:10.756 ******** 2026-03-26 02:07:04.083259 | orchestrator | ok: [testbed-manager] 2026-03-26 02:07:04.083269 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:07:04.083280 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:07:04.083291 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:07:04.083301 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:07:04.083312 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:07:04.083323 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:07:04.083334 | orchestrator | 2026-03-26 02:07:04.083344 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-03-26 02:07:04.083355 | orchestrator | Thursday 26 March 2026 02:06:57 +0000 (0:00:10.659) 0:06:21.415 ******** 2026-03-26 
02:07:04.083366 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-26 02:07:04.083376 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-26 02:07:04.083385 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-26 02:07:04.083395 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-26 02:07:04.083404 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-26 02:07:04.083414 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-26 02:07:04.083423 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-26 02:07:04.083433 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-26 02:07:04.083442 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-26 02:07:04.083460 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-26 02:07:04.083469 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-26 02:07:04.083525 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-26 02:07:04.083537 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-26 02:07:04.083546 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-26 02:07:04.083556 | orchestrator |
2026-03-26 02:07:04.083565 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-26 02:07:04.083575 | orchestrator | Thursday 26 March 2026 02:06:58 +0000 (0:00:01.231) 0:06:22.647 ********
2026-03-26 02:07:04.083589 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:04.083599 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:04.083608 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:04.083618 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:04.083627 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:04.083637 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:04.083646 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:04.083656 | orchestrator |
2026-03-26 02:07:04.083665 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-26 02:07:04.083675 | orchestrator | Thursday 26 March 2026 02:06:59 +0000 (0:00:00.659) 0:06:23.306 ********
2026-03-26 02:07:04.083685 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:04.083694 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:04.083704 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:04.083713 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:04.083723 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:04.083733 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:04.083742 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:04.083752 | orchestrator |
2026-03-26 02:07:04.083761 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-26 02:07:04.083772 | orchestrator | Thursday 26 March 2026 02:07:03 +0000 (0:00:03.719) 0:06:27.026 ********
2026-03-26 02:07:04.083782 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:04.083791 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:04.083801 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:04.083810 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:04.083820 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:04.083829 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:04.083838 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:04.083848 | orchestrator |
2026-03-26 02:07:04.083858 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-26 02:07:04.083868 | orchestrator | Thursday 26 March 2026 02:07:03 +0000 (0:00:00.547) 0:06:27.574 ********
2026-03-26 02:07:04.083878 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-26 02:07:04.083887 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-26 02:07:04.083897 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:04.083906 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-26 02:07:04.083916 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-26 02:07:04.083925 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:04.083935 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-26 02:07:04.083944 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-26 02:07:04.083954 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:04.083972 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-26 02:07:25.220091 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-26 02:07:25.220257 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:25.220276 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-26 02:07:25.220287 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-26 02:07:25.220298 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:25.220330 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-26 02:07:25.220341 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-26 02:07:25.220351 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:25.220361 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-26 02:07:25.220370 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-26 02:07:25.220380 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:25.220390 | orchestrator |
2026-03-26 02:07:25.220401 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-26 02:07:25.220412 | orchestrator | Thursday 26 March 2026 02:07:04 +0000 (0:00:00.853) 0:06:28.427 ********
2026-03-26 02:07:25.220422 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:25.220431 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:25.220441 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:25.220450 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:25.220460 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:25.220469 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:25.220479 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:25.220489 | orchestrator |
2026-03-26 02:07:25.220498 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-26 02:07:25.220508 | orchestrator | Thursday 26 March 2026 02:07:04 +0000 (0:00:00.579) 0:06:29.006 ********
2026-03-26 02:07:25.220518 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:25.220527 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:25.220536 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:25.220546 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:25.220555 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:25.220565 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:25.220574 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:25.220584 | orchestrator |
2026-03-26 02:07:25.220593 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-26 02:07:25.220603 | orchestrator | Thursday 26 March 2026 02:07:05 +0000 (0:00:00.579) 0:06:29.504 ********
2026-03-26 02:07:25.220616 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:25.220627 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:25.220638 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:25.220650 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:25.220661 | orchestrator |
skipping: [testbed-node-0]
2026-03-26 02:07:25.220671 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:25.220682 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:25.220693 | orchestrator |
2026-03-26 02:07:25.220704 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-26 02:07:25.220716 | orchestrator | Thursday 26 March 2026 02:07:06 +0000 (0:00:00.579) 0:06:30.084 ********
2026-03-26 02:07:25.220734 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.220750 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:25.220766 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:25.220782 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:25.220799 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:25.220815 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:25.220830 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:25.220846 | orchestrator |
2026-03-26 02:07:25.220861 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-26 02:07:25.220876 | orchestrator | Thursday 26 March 2026 02:07:08 +0000 (0:00:02.001) 0:06:32.086 ********
2026-03-26 02:07:25.220894 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:07:25.220915 | orchestrator |
2026-03-26 02:07:25.220933 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-26 02:07:25.220953 | orchestrator | Thursday 26 March 2026 02:07:09 +0000 (0:00:01.015) 0:06:33.101 ********
2026-03-26 02:07:25.220993 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.221011 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:25.221027 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:25.221043 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:25.221060 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:25.221077 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:25.221093 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:25.221109 | orchestrator |
2026-03-26 02:07:25.221125 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-26 02:07:25.221135 | orchestrator | Thursday 26 March 2026 02:07:09 +0000 (0:00:00.889) 0:06:33.991 ********
2026-03-26 02:07:25.221145 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.221155 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:25.221164 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:25.221198 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:25.221208 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:25.221217 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:25.221227 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:25.221237 | orchestrator |
2026-03-26 02:07:25.221247 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-26 02:07:25.221256 | orchestrator | Thursday 26 March 2026 02:07:10 +0000 (0:00:00.846) 0:06:34.838 ********
2026-03-26 02:07:25.221266 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.221276 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:25.221285 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:25.221295 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:25.221305 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:25.221321 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:25.221337 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:25.221353 | orchestrator |
2026-03-26 02:07:25.221367 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-26 02:07:25.221405 | orchestrator | Thursday 26 March 2026 02:07:12 +0000 (0:00:01.747) 0:06:36.585 ********
2026-03-26 02:07:25.221421 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:25.221436 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:25.221452 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:25.221470 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:25.221487 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:25.221502 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:25.221517 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:25.221527 | orchestrator |
2026-03-26 02:07:25.221537 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-26 02:07:25.221546 | orchestrator | Thursday 26 March 2026 02:07:14 +0000 (0:00:01.485) 0:06:38.070 ********
2026-03-26 02:07:25.221556 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.221566 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:25.221575 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:25.221585 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:25.221594 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:25.221604 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:25.221614 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:25.221623 | orchestrator |
2026-03-26 02:07:25.221633 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-03-26 02:07:25.221643 | orchestrator | Thursday 26 March 2026 02:07:15 +0000 (0:00:01.332) 0:06:39.403 ********
2026-03-26 02:07:25.221653 | orchestrator | changed: [testbed-manager]
2026-03-26 02:07:25.221662 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:25.221671 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:25.221681 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:25.221690 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:25.221700 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:25.221709 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:25.221719 | orchestrator |
2026-03-26 02:07:25.221738 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-03-26 02:07:25.221748 | orchestrator | Thursday 26 March 2026 02:07:16 +0000 (0:00:01.554) 0:06:40.958 ********
2026-03-26 02:07:25.221758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:07:25.221768 | orchestrator |
2026-03-26 02:07:25.221778 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-03-26 02:07:25.221788 | orchestrator | Thursday 26 March 2026 02:07:18 +0000 (0:00:01.118) 0:06:42.076 ********
2026-03-26 02:07:25.221797 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:25.221807 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:25.221817 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:25.221826 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:25.221842 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:25.221856 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.221872 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:25.221889 | orchestrator |
2026-03-26 02:07:25.221906 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-03-26 02:07:25.221922 | orchestrator | Thursday 26 March 2026 02:07:19 +0000 (0:00:01.324) 0:06:43.400 ********
2026-03-26 02:07:25.221933 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.221944 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:25.221960 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:25.221978 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:25.221993 | orchestrator |
ok: [testbed-node-1]
2026-03-26 02:07:25.222111 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:25.222134 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:25.222149 | orchestrator |
2026-03-26 02:07:25.222164 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-03-26 02:07:25.222217 | orchestrator | Thursday 26 March 2026 02:07:21 +0000 (0:00:02.024) 0:06:45.425 ********
2026-03-26 02:07:25.222233 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.222247 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:25.222262 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:25.222279 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:25.222295 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:25.222312 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:25.222328 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:25.222344 | orchestrator |
2026-03-26 02:07:25.222361 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-03-26 02:07:25.222377 | orchestrator | Thursday 26 March 2026 02:07:22 +0000 (0:00:01.150) 0:06:46.576 ********
2026-03-26 02:07:25.222393 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:25.222408 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:25.222424 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:25.222440 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:25.222454 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:25.222471 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:25.222488 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:25.222505 | orchestrator |
2026-03-26 02:07:25.222521 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-03-26 02:07:25.222535 | orchestrator | Thursday 26 March 2026 02:07:23 +0000 (0:00:01.315) 0:06:47.891 ********
2026-03-26 02:07:25.222546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:07:25.222556 | orchestrator |
2026-03-26 02:07:25.222566 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-26 02:07:25.222583 | orchestrator | Thursday 26 March 2026 02:07:24 +0000 (0:00:01.010) 0:06:48.902 ********
2026-03-26 02:07:25.222599 | orchestrator |
2026-03-26 02:07:25.222615 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-26 02:07:25.222647 | orchestrator | Thursday 26 March 2026 02:07:24 +0000 (0:00:00.043) 0:06:48.946 ********
2026-03-26 02:07:25.222663 | orchestrator |
2026-03-26 02:07:25.222680 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-26 02:07:25.222691 | orchestrator | Thursday 26 March 2026 02:07:24 +0000 (0:00:00.049) 0:06:48.995 ********
2026-03-26 02:07:25.222700 | orchestrator |
2026-03-26 02:07:25.222710 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-26 02:07:25.222734 | orchestrator | Thursday 26 March 2026 02:07:25 +0000 (0:00:00.043) 0:06:49.039 ********
2026-03-26 02:07:52.290279 | orchestrator |
2026-03-26 02:07:52.290362 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-26 02:07:52.290370 | orchestrator | Thursday 26 March 2026 02:07:25 +0000 (0:00:00.048) 0:06:49.088 ********
2026-03-26 02:07:52.290375 | orchestrator |
2026-03-26 02:07:52.290379 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-26 02:07:52.290384 | orchestrator | Thursday 26 March 2026 02:07:25 +0000 (0:00:00.048) 0:06:49.136 ********
2026-03-26 02:07:52.290388 | orchestrator |
2026-03-26 02:07:52.290392 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-03-26 02:07:52.290396 | orchestrator | Thursday 26 March 2026 02:07:25 +0000 (0:00:00.040) 0:06:49.177 ********
2026-03-26 02:07:52.290399 | orchestrator |
2026-03-26 02:07:52.290403 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-26 02:07:52.290407 | orchestrator | Thursday 26 March 2026 02:07:25 +0000 (0:00:00.042) 0:06:49.220 ********
2026-03-26 02:07:52.290411 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:52.290416 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:52.290420 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:52.290424 | orchestrator |
2026-03-26 02:07:52.290427 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-03-26 02:07:52.290431 | orchestrator | Thursday 26 March 2026 02:07:26 +0000 (0:00:01.242) 0:06:50.463 ********
2026-03-26 02:07:52.290435 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:52.290440 | orchestrator | changed: [testbed-manager]
2026-03-26 02:07:52.290443 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:52.290447 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:52.290451 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:52.290455 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:52.290459 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:52.290462 | orchestrator |
2026-03-26 02:07:52.290466 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-03-26 02:07:52.290470 | orchestrator | Thursday 26 March 2026 02:07:28 +0000 (0:00:01.569) 0:06:52.032 ********
2026-03-26 02:07:52.290474 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:52.290478 | orchestrator | changed: [testbed-manager]
2026-03-26 02:07:52.290481 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:52.290485 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:52.290489 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:52.290492 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:52.290496 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:52.290500 | orchestrator |
2026-03-26 02:07:52.290504 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-03-26 02:07:52.290507 | orchestrator | Thursday 26 March 2026 02:07:29 +0000 (0:00:01.262) 0:06:53.294 ********
2026-03-26 02:07:52.290511 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:52.290515 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:52.290519 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:52.290523 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:52.290526 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:52.290530 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:52.290534 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:52.290538 | orchestrator |
2026-03-26 02:07:52.290541 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-03-26 02:07:52.290545 | orchestrator | Thursday 26 March 2026 02:07:31 +0000 (0:00:02.423) 0:06:55.717 ********
2026-03-26 02:07:52.290566 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:52.290570 | orchestrator |
2026-03-26 02:07:52.290586 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-03-26 02:07:52.290590 | orchestrator | Thursday 26 March 2026 02:07:31 +0000 (0:00:00.106) 0:06:55.824 ********
2026-03-26 02:07:52.290594 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:52.290597 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:52.290601 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:52.290613 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:52.290617 |
orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:52.290621 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:07:52.290624 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:52.290628 | orchestrator |
2026-03-26 02:07:52.290632 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-03-26 02:07:52.290637 | orchestrator | Thursday 26 March 2026 02:07:32 +0000 (0:00:01.117) 0:06:56.942 ********
2026-03-26 02:07:52.290640 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:52.290644 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:52.290648 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:52.290652 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:52.290655 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:52.290659 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:52.290663 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:52.290666 | orchestrator |
2026-03-26 02:07:52.290670 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-03-26 02:07:52.290674 | orchestrator | Thursday 26 March 2026 02:07:33 +0000 (0:00:00.632) 0:06:57.574 ********
2026-03-26 02:07:52.290678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:07:52.290684 | orchestrator |
2026-03-26 02:07:52.290688 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-03-26 02:07:52.290692 | orchestrator | Thursday 26 March 2026 02:07:34 +0000 (0:00:01.197) 0:06:58.772 ********
2026-03-26 02:07:52.290696 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:52.290700 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:52.290703 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:52.290707 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:52.290711 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:52.290714 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:52.290718 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:52.290722 | orchestrator |
2026-03-26 02:07:52.290726 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-03-26 02:07:52.290730 | orchestrator | Thursday 26 March 2026 02:07:35 +0000 (0:00:00.853) 0:06:59.625 ********
2026-03-26 02:07:52.290734 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-03-26 02:07:52.290747 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-03-26 02:07:52.290752 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-03-26 02:07:52.290755 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-03-26 02:07:52.290761 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-03-26 02:07:52.290767 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-03-26 02:07:52.290773 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-03-26 02:07:52.290779 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-03-26 02:07:52.290784 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-03-26 02:07:52.290790 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-03-26 02:07:52.290795 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-03-26 02:07:52.290801 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-03-26 02:07:52.290811 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-03-26 02:07:52.290817 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-03-26 02:07:52.290823 | orchestrator |
2026-03-26 02:07:52.290829 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-03-26 02:07:52.290835 | orchestrator | Thursday 26 March 2026 02:07:38 +0000 (0:00:02.520) 0:07:02.146 ********
2026-03-26 02:07:52.290841 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:52.290848 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:52.290855 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:52.290861 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:52.290867 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:52.290873 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:52.290879 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:52.290885 | orchestrator |
2026-03-26 02:07:52.290891 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-03-26 02:07:52.290897 | orchestrator | Thursday 26 March 2026 02:07:38 +0000 (0:00:00.720) 0:07:02.866 ********
2026-03-26 02:07:52.290905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:07:52.290913 | orchestrator |
2026-03-26 02:07:52.290917 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-03-26 02:07:52.290922 | orchestrator | Thursday 26 March 2026 02:07:39 +0000 (0:00:00.835) 0:07:03.701 ********
2026-03-26 02:07:52.290926 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:52.290931 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:52.290935 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:52.290940 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:52.290944 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:52.290948 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:52.290953 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:52.290957 | orchestrator |
2026-03-26 02:07:52.290961 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-03-26 02:07:52.290966 | orchestrator | Thursday 26 March 2026 02:07:40 +0000 (0:00:00.850) 0:07:04.552 ********
2026-03-26 02:07:52.290974 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:52.290978 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:52.290983 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:52.290987 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:52.290991 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:52.290995 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:52.291000 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:52.291004 | orchestrator |
2026-03-26 02:07:52.291008 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-03-26 02:07:52.291013 | orchestrator | Thursday 26 March 2026 02:07:41 +0000 (0:00:01.104) 0:07:05.656 ********
2026-03-26 02:07:52.291017 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:52.291022 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:52.291026 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:52.291030 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:52.291035 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:52.291039 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:52.291043 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:52.291048 | orchestrator |
2026-03-26 02:07:52.291052 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-03-26 02:07:52.291057 | orchestrator | Thursday 26 March 2026 02:07:42 +0000 (0:00:00.544) 0:07:06.201 ********
2026-03-26 02:07:52.291061 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:07:52.291066 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:52.291070 |
orchestrator | ok: [testbed-node-4]
2026-03-26 02:07:52.291074 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:07:52.291079 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:07:52.291087 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:07:52.291091 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:07:52.291096 | orchestrator |
2026-03-26 02:07:52.291100 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-03-26 02:07:52.291104 | orchestrator | Thursday 26 March 2026 02:07:43 +0000 (0:00:01.498) 0:07:07.700 ********
2026-03-26 02:07:52.291109 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:07:52.291113 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:07:52.291118 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:07:52.291122 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:07:52.291126 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:07:52.291131 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:07:52.291135 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:07:52.291140 | orchestrator |
2026-03-26 02:07:52.291144 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-03-26 02:07:52.291148 | orchestrator | Thursday 26 March 2026 02:07:44 +0000 (0:00:00.616) 0:07:08.316 ********
2026-03-26 02:07:52.291153 | orchestrator | ok: [testbed-manager]
2026-03-26 02:07:52.291157 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:07:52.291161 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:07:52.291166 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:07:52.291170 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:07:52.291175 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:07:52.291183 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:08:25.572987 | orchestrator |
2026-03-26 02:08:25.573127 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-03-26 02:08:25.573156 | orchestrator | Thursday 26 March 2026 02:07:52 +0000 (0:00:07.978) 0:07:16.295 ********
2026-03-26 02:08:25.573175 | orchestrator | ok: [testbed-manager]
2026-03-26 02:08:25.573195 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:08:25.573215 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:08:25.573234 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:08:25.573253 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:08:25.573335 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:08:25.573353 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:08:25.573371 | orchestrator |
2026-03-26 02:08:25.573383 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-03-26 02:08:25.573395 | orchestrator | Thursday 26 March 2026 02:07:53 +0000 (0:00:01.620) 0:07:17.916 ********
2026-03-26 02:08:25.573410 | orchestrator | ok: [testbed-manager]
2026-03-26 02:08:25.573429 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:08:25.573457 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:08:25.573477 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:08:25.573496 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:08:25.573514 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:08:25.573535 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:08:25.573554 | orchestrator |
2026-03-26 02:08:25.573574 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-03-26 02:08:25.573590 | orchestrator | Thursday 26 March 2026 02:07:55 +0000 (0:00:01.641) 0:07:19.557 ********
2026-03-26 02:08:25.573603 | orchestrator | ok: [testbed-manager]
2026-03-26 02:08:25.573616 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:08:25.573629 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:08:25.573642 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:08:25.573656 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:08:25.573668 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:08:25.573681 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:08:25.573694 | orchestrator |
2026-03-26 02:08:25.573708 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-26 02:08:25.573720 | orchestrator | Thursday 26 March 2026 02:07:57 +0000 (0:00:01.627) 0:07:21.185 ********
2026-03-26 02:08:25.573733 | orchestrator | ok: [testbed-manager]
2026-03-26 02:08:25.573747 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:08:25.573760 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:08:25.573802 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:08:25.573815 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:08:25.573828 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:08:25.573840 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:08:25.573852 | orchestrator |
2026-03-26 02:08:25.573863 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-26 02:08:25.573873 | orchestrator | Thursday 26 March 2026 02:07:58 +0000 (0:00:00.872) 0:07:22.057 ********
2026-03-26 02:08:25.573883 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:08:25.573894 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:08:25.573903 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:08:25.573913 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:08:25.573923 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:08:25.573932 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:08:25.573942 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:08:25.573951 | orchestrator |
2026-03-26 02:08:25.573961 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-03-26 02:08:25.573971 | orchestrator | Thursday 26 March 2026 02:07:59 +0000 (0:00:01.129) 0:07:23.186 ********
2026-03-26 02:08:25.573981 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:08:25.573990 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:08:25.574000 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:08:25.574009 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:08:25.574105 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:08:25.574117 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:08:25.574126 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:08:25.574136 | orchestrator |
2026-03-26 02:08:25.574146 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-03-26 02:08:25.574156 | orchestrator | Thursday 26 March 2026 02:07:59 +0000 (0:00:00.559) 0:07:23.746 ********
2026-03-26 02:08:25.574248 | orchestrator | ok: [testbed-manager]
2026-03-26 02:08:25.574309 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:08:25.574321 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:08:25.574330 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:08:25.574340 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:08:25.574350 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:08:25.574359 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:08:25.574369 | orchestrator |
2026-03-26 02:08:25.574379 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-03-26 02:08:25.574389 | orchestrator | Thursday 26 March 2026 02:08:00 +0000 (0:00:00.580) 0:07:24.326 ********
2026-03-26 02:08:25.574401 | orchestrator | ok: [testbed-manager]
2026-03-26 02:08:25.574418 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:08:25.574433 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:08:25.574450 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:08:25.574465 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:08:25.574483 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:08:25.574499 | orchestrator | ok: [testbed-node-2]
2026-03-26
02:08:25.574515 | orchestrator | 2026-03-26 02:08:25.574526 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-26 02:08:25.574536 | orchestrator | Thursday 26 March 2026 02:08:01 +0000 (0:00:00.796) 0:07:25.123 ******** 2026-03-26 02:08:25.574545 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:25.574555 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:25.574564 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:25.574573 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:25.574583 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:25.574592 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:25.574602 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:25.574611 | orchestrator | 2026-03-26 02:08:25.574620 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-26 02:08:25.574630 | orchestrator | Thursday 26 March 2026 02:08:01 +0000 (0:00:00.570) 0:07:25.693 ******** 2026-03-26 02:08:25.574645 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:25.574661 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:25.574693 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:25.574708 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:25.574721 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:25.574736 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:25.574751 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:25.574766 | orchestrator | 2026-03-26 02:08:25.574812 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-03-26 02:08:25.574831 | orchestrator | Thursday 26 March 2026 02:08:07 +0000 (0:00:05.626) 0:07:31.319 ******** 2026-03-26 02:08:25.574846 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:08:25.574868 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:08:25.574884 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:08:25.574900 
| orchestrator | skipping: [testbed-node-5] 2026-03-26 02:08:25.574916 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:08:25.574930 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:08:25.574945 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:08:25.574960 | orchestrator | 2026-03-26 02:08:25.574977 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-03-26 02:08:25.574993 | orchestrator | Thursday 26 March 2026 02:08:07 +0000 (0:00:00.601) 0:07:31.921 ******** 2026-03-26 02:08:25.575013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:08:25.575033 | orchestrator | 2026-03-26 02:08:25.575049 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-03-26 02:08:25.575064 | orchestrator | Thursday 26 March 2026 02:08:09 +0000 (0:00:01.100) 0:07:33.021 ******** 2026-03-26 02:08:25.575076 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:25.575092 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:25.575182 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:25.575196 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:25.575206 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:25.575215 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:25.575225 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:25.575234 | orchestrator | 2026-03-26 02:08:25.575244 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-03-26 02:08:25.575254 | orchestrator | Thursday 26 March 2026 02:08:10 +0000 (0:00:01.951) 0:07:34.973 ******** 2026-03-26 02:08:25.575301 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:25.575311 | orchestrator | ok: [testbed-manager] 2026-03-26 
02:08:25.575321 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:25.575331 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:25.575340 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:25.575349 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:25.575359 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:25.575368 | orchestrator | 2026-03-26 02:08:25.575378 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-03-26 02:08:25.575388 | orchestrator | Thursday 26 March 2026 02:08:12 +0000 (0:00:01.256) 0:07:36.229 ******** 2026-03-26 02:08:25.575401 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:25.575418 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:25.575434 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:25.575450 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:25.575465 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:25.575481 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:25.575498 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:25.575515 | orchestrator | 2026-03-26 02:08:25.575531 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-03-26 02:08:25.575549 | orchestrator | Thursday 26 March 2026 02:08:13 +0000 (0:00:00.875) 0:07:37.105 ******** 2026-03-26 02:08:25.575571 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-26 02:08:25.575583 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-26 02:08:25.575606 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-26 02:08:25.575616 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-26 02:08:25.575625 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-26 02:08:25.575635 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-26 02:08:25.575644 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-03-26 02:08:25.575654 | orchestrator | 2026-03-26 02:08:25.575712 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-03-26 02:08:25.575723 | orchestrator | Thursday 26 March 2026 02:08:15 +0000 (0:00:02.118) 0:07:39.224 ******** 2026-03-26 02:08:25.575734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:08:25.575744 | orchestrator | 2026-03-26 02:08:25.575754 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-03-26 02:08:25.575764 | orchestrator | Thursday 26 March 2026 02:08:16 +0000 (0:00:00.919) 0:07:40.143 ******** 2026-03-26 02:08:25.575773 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:25.575784 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:25.575793 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:25.575803 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:25.575813 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:25.575823 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:25.575832 | orchestrator | changed: 
[testbed-manager] 2026-03-26 02:08:25.575842 | orchestrator | 2026-03-26 02:08:25.575865 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-03-26 02:08:56.718495 | orchestrator | Thursday 26 March 2026 02:08:25 +0000 (0:00:09.433) 0:07:49.577 ******** 2026-03-26 02:08:56.718604 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:56.718621 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:56.718632 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:56.718644 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:56.718655 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:56.718665 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:56.718676 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:56.718688 | orchestrator | 2026-03-26 02:08:56.718700 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-03-26 02:08:56.718712 | orchestrator | Thursday 26 March 2026 02:08:27 +0000 (0:00:01.978) 0:07:51.555 ******** 2026-03-26 02:08:56.718723 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:56.718734 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:56.718746 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:56.718757 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:56.718768 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:56.718779 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:56.718790 | orchestrator | 2026-03-26 02:08:56.718801 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-03-26 02:08:56.718812 | orchestrator | Thursday 26 March 2026 02:08:28 +0000 (0:00:01.217) 0:07:52.773 ******** 2026-03-26 02:08:56.718823 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.718836 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.718847 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.718858 | orchestrator | changed: 
[testbed-node-5] 2026-03-26 02:08:56.718869 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.718906 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.718917 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.718928 | orchestrator | 2026-03-26 02:08:56.718940 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-03-26 02:08:56.718954 | orchestrator | 2026-03-26 02:08:56.718967 | orchestrator | TASK [Include hardening role] ************************************************** 2026-03-26 02:08:56.718980 | orchestrator | Thursday 26 March 2026 02:08:30 +0000 (0:00:01.322) 0:07:54.096 ******** 2026-03-26 02:08:56.718993 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:08:56.719006 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:08:56.719019 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:08:56.719032 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:08:56.719044 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:08:56.719058 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:08:56.719071 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:08:56.719084 | orchestrator | 2026-03-26 02:08:56.719095 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-03-26 02:08:56.719106 | orchestrator | 2026-03-26 02:08:56.719117 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-03-26 02:08:56.719129 | orchestrator | Thursday 26 March 2026 02:08:30 +0000 (0:00:00.768) 0:07:54.865 ******** 2026-03-26 02:08:56.719140 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.719151 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.719162 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.719173 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.719184 | orchestrator | changed: [testbed-node-5] 2026-03-26 
02:08:56.719195 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.719206 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.719216 | orchestrator | 2026-03-26 02:08:56.719228 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-03-26 02:08:56.719255 | orchestrator | Thursday 26 March 2026 02:08:32 +0000 (0:00:01.260) 0:07:56.126 ******** 2026-03-26 02:08:56.719266 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:56.719277 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:56.719288 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:56.719328 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:56.719341 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:56.719352 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:56.719362 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:56.719373 | orchestrator | 2026-03-26 02:08:56.719384 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-03-26 02:08:56.719395 | orchestrator | Thursday 26 March 2026 02:08:33 +0000 (0:00:01.433) 0:07:57.559 ******** 2026-03-26 02:08:56.719406 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:08:56.719417 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:08:56.719427 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:08:56.719438 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:08:56.719449 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:08:56.719460 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:08:56.719470 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:08:56.719481 | orchestrator | 2026-03-26 02:08:56.719492 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-03-26 02:08:56.719503 | orchestrator | Thursday 26 March 2026 02:08:34 +0000 (0:00:00.536) 0:07:58.095 ******** 2026-03-26 02:08:56.719515 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:08:56.719527 | orchestrator | 2026-03-26 02:08:56.719538 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-03-26 02:08:56.719550 | orchestrator | Thursday 26 March 2026 02:08:35 +0000 (0:00:01.101) 0:07:59.197 ******** 2026-03-26 02:08:56.719561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:08:56.719584 | orchestrator | 2026-03-26 02:08:56.719595 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-03-26 02:08:56.719606 | orchestrator | Thursday 26 March 2026 02:08:36 +0000 (0:00:00.910) 0:08:00.108 ******** 2026-03-26 02:08:56.719616 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:56.719627 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.719638 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.719649 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.719660 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.719670 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.719681 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.719692 | orchestrator | 2026-03-26 02:08:56.719721 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-03-26 02:08:56.719733 | orchestrator | Thursday 26 March 2026 02:08:44 +0000 (0:00:08.733) 0:08:08.841 ******** 2026-03-26 02:08:56.719744 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.719754 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.719765 | orchestrator | changed: [testbed-node-4] 2026-03-26 
02:08:56.719776 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:56.719787 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.719797 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.719808 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.719818 | orchestrator | 2026-03-26 02:08:56.719829 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-03-26 02:08:56.719840 | orchestrator | Thursday 26 March 2026 02:08:45 +0000 (0:00:00.879) 0:08:09.721 ******** 2026-03-26 02:08:56.719851 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.719862 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.719872 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.719883 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:56.719893 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.719904 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.719915 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.719925 | orchestrator | 2026-03-26 02:08:56.719936 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-03-26 02:08:56.719947 | orchestrator | Thursday 26 March 2026 02:08:47 +0000 (0:00:01.345) 0:08:11.067 ******** 2026-03-26 02:08:56.719957 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.719968 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.719979 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.719989 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:56.720000 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.720011 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.720021 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.720032 | orchestrator | 2026-03-26 02:08:56.720043 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-03-26 02:08:56.720054 | orchestrator | Thursday 26 March 2026 02:08:49 +0000 (0:00:02.143) 0:08:13.211 ******** 2026-03-26 02:08:56.720064 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.720075 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.720086 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.720096 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:56.720107 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.720117 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.720128 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.720139 | orchestrator | 2026-03-26 02:08:56.720149 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-03-26 02:08:56.720161 | orchestrator | Thursday 26 March 2026 02:08:50 +0000 (0:00:01.251) 0:08:14.462 ******** 2026-03-26 02:08:56.720171 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.720182 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.720199 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.720210 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:56.720221 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.720231 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.720242 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.720252 | orchestrator | 2026-03-26 02:08:56.720263 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-03-26 02:08:56.720274 | orchestrator | 2026-03-26 02:08:56.720291 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-03-26 02:08:56.720368 | orchestrator | Thursday 26 March 2026 02:08:51 +0000 (0:00:01.174) 0:08:15.636 ******** 2026-03-26 02:08:56.720388 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-26 02:08:56.720405 | orchestrator | 2026-03-26 02:08:56.720417 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-26 02:08:56.720428 | orchestrator | Thursday 26 March 2026 02:08:52 +0000 (0:00:00.861) 0:08:16.497 ******** 2026-03-26 02:08:56.720439 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:56.720450 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:56.720461 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:56.720472 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:56.720482 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:56.720493 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:56.720504 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:56.720515 | orchestrator | 2026-03-26 02:08:56.720526 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-26 02:08:56.720537 | orchestrator | Thursday 26 March 2026 02:08:53 +0000 (0:00:01.080) 0:08:17.578 ******** 2026-03-26 02:08:56.720548 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:56.720559 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:56.720570 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:56.720580 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:56.720591 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:56.720602 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:56.720613 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:56.720624 | orchestrator | 2026-03-26 02:08:56.720634 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-03-26 02:08:56.720645 | orchestrator | Thursday 26 March 2026 02:08:54 +0000 (0:00:01.166) 0:08:18.745 ******** 2026-03-26 02:08:56.720656 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-26 02:08:56.720668 | orchestrator | 2026-03-26 02:08:56.720678 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-03-26 02:08:56.720689 | orchestrator | Thursday 26 March 2026 02:08:55 +0000 (0:00:01.081) 0:08:19.826 ******** 2026-03-26 02:08:56.720700 | orchestrator | ok: [testbed-manager] 2026-03-26 02:08:56.720711 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:08:56.720722 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:08:56.720733 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:08:56.720743 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:08:56.720754 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:08:56.720765 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:08:56.720775 | orchestrator | 2026-03-26 02:08:56.720795 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-03-26 02:08:58.455572 | orchestrator | Thursday 26 March 2026 02:08:56 +0000 (0:00:00.897) 0:08:20.723 ******** 2026-03-26 02:08:58.455704 | orchestrator | changed: [testbed-manager] 2026-03-26 02:08:58.455728 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:08:58.455749 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:08:58.455768 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:08:58.455787 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:08:58.455806 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:08:58.455825 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:08:58.455876 | orchestrator | 2026-03-26 02:08:58.455897 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:08:58.455918 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-03-26 02:08:58.455938 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-03-26 02:08:58.455958 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-26 02:08:58.455977 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-03-26 02:08:58.455997 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-03-26 02:08:58.456016 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-26 02:08:58.456034 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-03-26 02:08:58.456053 | orchestrator | 2026-03-26 02:08:58.456072 | orchestrator | 2026-03-26 02:08:58.456091 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:08:58.456113 | orchestrator | Thursday 26 March 2026 02:08:57 +0000 (0:00:01.223) 0:08:21.947 ******** 2026-03-26 02:08:58.456133 | orchestrator | =============================================================================== 2026-03-26 02:08:58.456154 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.82s 2026-03-26 02:08:58.456175 | orchestrator | osism.commons.packages : Download required packages -------------------- 44.82s 2026-03-26 02:08:58.456196 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.08s 2026-03-26 02:08:58.456216 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.03s 2026-03-26 02:08:58.456237 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 15.15s 2026-03-26 02:08:58.456276 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.50s 2026-03-26 02:08:58.456297 | orchestrator | osism.services.docker : Install docker package ------------------------- 
10.66s 2026-03-26 02:08:58.456347 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.43s 2026-03-26 02:08:58.456365 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.73s 2026-03-26 02:08:58.456384 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.64s 2026-03-26 02:08:58.456405 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.55s 2026-03-26 02:08:58.456425 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.36s 2026-03-26 02:08:58.456446 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.29s 2026-03-26 02:08:58.456467 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.26s 2026-03-26 02:08:58.456488 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.10s 2026-03-26 02:08:58.456507 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.98s 2026-03-26 02:08:58.456527 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.69s 2026-03-26 02:08:58.456545 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.60s 2026-03-26 02:08:58.456564 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.10s 2026-03-26 02:08:58.456583 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.63s 2026-03-26 02:08:58.831211 | orchestrator | + osism apply fail2ban 2026-03-26 02:09:12.048426 | orchestrator | 2026-03-26 02:09:12 | INFO  | Task fcfdf61c-4c97-405f-bf69-c9730546e312 (fail2ban) was prepared for execution. 
2026-03-26 02:09:12.048515 | orchestrator | 2026-03-26 02:09:12 | INFO  | It takes a moment until task fcfdf61c-4c97-405f-bf69-c9730546e312 (fail2ban) has been started and output is visible here. 2026-03-26 02:09:34.670729 | orchestrator | 2026-03-26 02:09:34.670808 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-03-26 02:09:34.670816 | orchestrator | 2026-03-26 02:09:34.670821 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-03-26 02:09:34.670827 | orchestrator | Thursday 26 March 2026 02:09:17 +0000 (0:00:00.287) 0:00:00.287 ******** 2026-03-26 02:09:34.670833 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:09:34.670839 | orchestrator | 2026-03-26 02:09:34.670844 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-03-26 02:09:34.670848 | orchestrator | Thursday 26 March 2026 02:09:18 +0000 (0:00:01.216) 0:00:01.504 ******** 2026-03-26 02:09:34.670852 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:09:34.670859 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:09:34.670863 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:09:34.670867 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:09:34.670872 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:09:34.670876 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:09:34.670880 | orchestrator | changed: [testbed-manager] 2026-03-26 02:09:34.670885 | orchestrator | 2026-03-26 02:09:34.670890 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-03-26 02:09:34.670894 | orchestrator | Thursday 26 March 2026 02:09:29 +0000 (0:00:11.356) 0:00:12.860 ******** 
2026-03-26 02:09:34.670898 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:09:34.670903 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:09:34.670907 | orchestrator | changed: [testbed-manager]
2026-03-26 02:09:34.670912 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:09:34.670916 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:09:34.670920 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:09:34.670924 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:09:34.670929 | orchestrator |
2026-03-26 02:09:34.670933 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-26 02:09:34.670937 | orchestrator | Thursday 26 March 2026 02:09:31 +0000 (0:00:01.453) 0:00:14.313 ********
2026-03-26 02:09:34.670942 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:09:34.670947 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:09:34.670952 | orchestrator | ok: [testbed-manager]
2026-03-26 02:09:34.670956 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:09:34.670960 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:09:34.670965 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:09:34.670969 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:09:34.670973 | orchestrator |
2026-03-26 02:09:34.670978 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-26 02:09:34.670982 | orchestrator | Thursday 26 March 2026 02:09:32 +0000 (0:00:01.479) 0:00:15.793 ********
2026-03-26 02:09:34.670986 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:09:34.670991 | orchestrator | changed: [testbed-manager]
2026-03-26 02:09:34.670995 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:09:34.671000 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:09:34.671004 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:09:34.671008 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:09:34.671013 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:09:34.671017 | orchestrator |
2026-03-26 02:09:34.671021 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:09:34.671026 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:09:34.671050 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:09:34.671055 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:09:34.671059 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:09:34.671063 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:09:34.671068 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:09:34.671072 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:09:34.671076 | orchestrator |
2026-03-26 02:09:34.671081 | orchestrator |
2026-03-26 02:09:34.671085 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:09:34.671089 | orchestrator | Thursday 26 March 2026 02:09:34 +0000 (0:00:01.671) 0:00:17.465 ********
2026-03-26 02:09:34.671094 | orchestrator | ===============================================================================
2026-03-26 02:09:34.671098 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.36s
2026-03-26 02:09:34.671102 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.67s
2026-03-26 02:09:34.671107 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.48s
2026-03-26 02:09:34.671111 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.45s
2026-03-26 02:09:34.671116 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.22s
2026-03-26 02:09:35.003323 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-26 02:09:35.003496 | orchestrator | + osism apply network
2026-03-26 02:09:47.100673 | orchestrator | 2026-03-26 02:09:47 | INFO  | Task 3d321b30-b3f8-4748-8c86-c02309be4909 (network) was prepared for execution.
2026-03-26 02:09:47.100791 | orchestrator | 2026-03-26 02:09:47 | INFO  | It takes a moment until task 3d321b30-b3f8-4748-8c86-c02309be4909 (network) has been started and output is visible here.
2026-03-26 02:10:18.248303 | orchestrator |
2026-03-26 02:10:18.248492 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-26 02:10:18.248510 | orchestrator |
2026-03-26 02:10:18.248521 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-26 02:10:18.248530 | orchestrator | Thursday 26 March 2026 02:09:52 +0000 (0:00:00.294) 0:00:00.294 ********
2026-03-26 02:10:18.248540 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.248550 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:18.248559 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:18.248568 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:18.248577 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:18.248585 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:18.248594 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:18.248602 | orchestrator |
2026-03-26 02:10:18.248611 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-26 02:10:18.248620 | orchestrator | Thursday 26 March 2026 02:09:52 +0000 (0:00:00.779) 0:00:01.073 ********
2026-03-26 02:10:18.248630 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:10:18.248641 | orchestrator |
2026-03-26 02:10:18.248650 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-26 02:10:18.248682 | orchestrator | Thursday 26 March 2026 02:09:54 +0000 (0:00:01.368) 0:00:02.442 ********
2026-03-26 02:10:18.248692 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:18.248701 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.248709 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:18.248717 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:18.248726 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:18.248734 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:18.248742 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:18.248751 | orchestrator |
2026-03-26 02:10:18.248760 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-26 02:10:18.248768 | orchestrator | Thursday 26 March 2026 02:09:56 +0000 (0:00:02.009) 0:00:04.452 ********
2026-03-26 02:10:18.248777 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.248785 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:18.248795 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:18.248803 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:18.248812 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:18.248820 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:18.248828 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:18.248837 | orchestrator |
2026-03-26 02:10:18.248846 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-26 02:10:18.248856 | orchestrator | Thursday 26 March 2026 02:09:58 +0000 (0:00:01.785) 0:00:06.237 ********
2026-03-26 02:10:18.248866 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-26 02:10:18.248876 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-26 02:10:18.248886 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-26 02:10:18.248896 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-26 02:10:18.248905 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-26 02:10:18.248915 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-26 02:10:18.248925 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-26 02:10:18.248935 | orchestrator |
2026-03-26 02:10:18.248962 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-26 02:10:18.248976 | orchestrator | Thursday 26 March 2026 02:09:59 +0000 (0:00:00.991) 0:00:07.229 ********
2026-03-26 02:10:18.248987 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-26 02:10:18.248998 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-26 02:10:18.249008 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-26 02:10:18.249017 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 02:10:18.249027 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-26 02:10:18.249036 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-26 02:10:18.249046 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-26 02:10:18.249056 | orchestrator |
2026-03-26 02:10:18.249066 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-26 02:10:18.249076 | orchestrator | Thursday 26 March 2026 02:10:03 +0000 (0:00:03.990) 0:00:11.220 ********
2026-03-26 02:10:18.249085 | orchestrator | changed: [testbed-manager]
2026-03-26 02:10:18.249095 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:10:18.249105 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:10:18.249115 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:10:18.249124 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:10:18.249134 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:10:18.249143 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:10:18.249154 | orchestrator |
2026-03-26 02:10:18.249164 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-26 02:10:18.249174 | orchestrator | Thursday 26 March 2026 02:10:04 +0000 (0:00:01.673) 0:00:12.894 ********
2026-03-26 02:10:18.249184 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-26 02:10:18.249193 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-26 02:10:18.249204 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-26 02:10:18.249214 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 02:10:18.249229 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-26 02:10:18.249238 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-26 02:10:18.249246 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-26 02:10:18.249255 | orchestrator |
2026-03-26 02:10:18.249264 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-03-26 02:10:18.249273 | orchestrator | Thursday 26 March 2026 02:10:06 +0000 (0:00:01.953) 0:00:14.847 ********
2026-03-26 02:10:18.249281 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.249290 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:18.249299 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:18.249307 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:18.249316 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:18.249325 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:18.249333 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:18.249342 | orchestrator |
2026-03-26 02:10:18.249351 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-03-26 02:10:18.249374 | orchestrator | Thursday 26 March 2026 02:10:07 +0000 (0:00:01.259) 0:00:16.107 ********
2026-03-26 02:10:18.249383 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:10:18.249392 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:10:18.249423 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:10:18.249433 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:10:18.249442 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:10:18.249450 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:10:18.249459 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:10:18.249467 | orchestrator |
2026-03-26 02:10:18.249476 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-03-26 02:10:18.249485 | orchestrator | Thursday 26 March 2026 02:10:08 +0000 (0:00:00.764) 0:00:16.871 ********
2026-03-26 02:10:18.249499 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:18.249514 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.249528 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:18.249541 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:18.249555 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:18.249568 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:18.249581 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:18.249596 | orchestrator |
2026-03-26 02:10:18.249612 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-03-26 02:10:18.249627 | orchestrator | Thursday 26 March 2026 02:10:10 +0000 (0:00:02.186) 0:00:19.058 ********
2026-03-26 02:10:18.249644 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:10:18.249653 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:10:18.249662 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:10:18.249670 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:10:18.249679 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:10:18.249687 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:10:18.249697 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-03-26 02:10:18.249707 | orchestrator |
2026-03-26 02:10:18.249715 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-03-26 02:10:18.249724 | orchestrator | Thursday 26 March 2026 02:10:11 +0000 (0:00:01.044) 0:00:20.102 ********
2026-03-26 02:10:18.249733 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.249741 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:10:18.249750 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:10:18.249758 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:10:18.249767 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:10:18.249775 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:10:18.249784 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:10:18.249792 | orchestrator |
2026-03-26 02:10:18.249801 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-03-26 02:10:18.249809 | orchestrator | Thursday 26 March 2026 02:10:13 +0000 (0:00:01.758) 0:00:21.860 ********
2026-03-26 02:10:18.249818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:10:18.249837 | orchestrator |
2026-03-26 02:10:18.249846 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-26 02:10:18.249854 | orchestrator | Thursday 26 March 2026 02:10:15 +0000 (0:00:01.181) 0:00:23.218 ********
2026-03-26 02:10:18.249863 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:18.249871 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.249880 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:18.249888 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:18.249902 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:18.249911 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:18.249919 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:18.249927 | orchestrator |
2026-03-26 02:10:18.249936 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-03-26 02:10:18.249944 | orchestrator | Thursday 26 March 2026 02:10:16 +0000 (0:00:01.181) 0:00:24.400 ********
2026-03-26 02:10:18.249953 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:18.249961 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:18.249970 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:18.249978 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:18.249987 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:18.249995 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:18.250004 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:18.250012 | orchestrator |
2026-03-26 02:10:18.250077 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-26 02:10:18.250086 | orchestrator | Thursday 26 March 2026 02:10:16 +0000 (0:00:00.713) 0:00:25.113 ********
2026-03-26 02:10:18.250095 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-03-26 02:10:18.250104 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-03-26 02:10:18.250112 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-03-26 02:10:18.250121 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-03-26 02:10:18.250129 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-26 02:10:18.250138 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-03-26 02:10:18.250146 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-26 02:10:18.250155 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-03-26 02:10:18.250163 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-26 02:10:18.250172 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-26 02:10:18.250180 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-26 02:10:18.250189 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-03-26 02:10:18.250197 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-26 02:10:18.250210 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-03-26 02:10:18.250225 | orchestrator |
2026-03-26 02:10:18.250249 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-03-26 02:10:38.488252 | orchestrator | Thursday 26 March 2026 02:10:18 +0000 (0:00:01.255) 0:00:26.368 ********
2026-03-26 02:10:38.488348 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:10:38.488360 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:10:38.488367 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:10:38.488374 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:10:38.488381 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:10:38.488385 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:10:38.488389 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:10:38.488393 | orchestrator |
2026-03-26 02:10:38.488421 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-03-26 02:10:38.488481 | orchestrator | Thursday 26 March 2026 02:10:18 +0000 (0:00:00.679) 0:00:27.048 ********
2026-03-26 02:10:38.488487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-1, testbed-node-0, testbed-node-5
2026-03-26 02:10:38.488494 | orchestrator |
2026-03-26 02:10:38.488498 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-03-26 02:10:38.488502 | orchestrator | Thursday 26 March 2026 02:10:25 +0000 (0:00:06.448) 0:00:33.497 ********
2026-03-26 02:10:38.488507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488511 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488566 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488625 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488638 | orchestrator |
2026-03-26 02:10:38.488644 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-03-26 02:10:38.488651 | orchestrator | Thursday 26 March 2026 02:10:32 +0000 (0:00:06.877) 0:00:40.374 ********
2026-03-26 02:10:38.488657 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488694 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-03-26 02:10:38.488697 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:38.488728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:45.982348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-03-26 02:10:45.982515 | orchestrator |
2026-03-26 02:10:45.982530 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-03-26 02:10:45.982539 | orchestrator | Thursday 26 March 2026 02:10:38 +0000 (0:00:06.228) 0:00:46.602 ********
2026-03-26 02:10:45.982549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:10:45.982557 | orchestrator |
2026-03-26 02:10:45.982568 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-03-26 02:10:45.982580 | orchestrator | Thursday 26 March 2026 02:10:39 +0000 (0:00:01.442) 0:00:48.045 ********
2026-03-26 02:10:45.982591 | orchestrator | ok: [testbed-manager]
2026-03-26 02:10:45.982600 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:10:45.982607 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:10:45.982615 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:10:45.982622 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:10:45.982629 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:10:45.982636 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:10:45.982644 | orchestrator |
2026-03-26 02:10:45.982651 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-03-26 02:10:45.982659 | orchestrator | Thursday 26 March 2026 02:10:41 +0000 (0:00:01.441) 0:00:49.487 ********
2026-03-26 02:10:45.982666 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-26 02:10:45.982674 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-26 02:10:45.982681 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-26 02:10:45.982687 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-26 02:10:45.982694 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:10:45.982702 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-26 02:10:45.982708 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-26 02:10:45.982715 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-26 02:10:45.982722 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-26 02:10:45.982728 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:10:45.982735 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-26 02:10:45.982757 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-26 02:10:45.982764 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-26 02:10:45.982771 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-26 02:10:45.982793 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:10:45.982800 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-26 02:10:45.982807 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-26 02:10:45.982813 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-26 02:10:45.982820 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-26 02:10:45.982827 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:10:45.982834 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-26 02:10:45.982841 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-26 02:10:45.982847 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-26 02:10:45.982854 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-26 02:10:45.982860 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:10:45.982867 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-26 02:10:45.982874 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-26 02:10:45.982880 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-26 02:10:45.982889 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-26 02:10:45.982900 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:10:45.982907 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-03-26 02:10:45.982914 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-03-26 02:10:45.982921 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-03-26 02:10:45.982927 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-03-26 02:10:45.982934 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:10:45.982941 | orchestrator |
2026-03-26 02:10:45.982947 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-03-26 02:10:45.982968 | orchestrator | Thursday 26 March 2026 02:10:44 +0000 (0:00:02.662) 0:00:52.150 ********
2026-03-26 02:10:45.982975 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:10:45.982982 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:10:45.982990 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:10:45.983001 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:10:45.983012 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:10:45.983023 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:10:45.983033 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:10:45.983047 | orchestrator |
2026-03-26 02:10:45.983062 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-03-26 02:10:45.983071 | orchestrator | Thursday 26 March 2026 02:10:44 +0000 (0:00:00.724) 0:00:52.874 ********
2026-03-26 02:10:45.983081 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:10:45.983091 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:10:45.983101 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:10:45.983190 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:10:45.983207 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:10:45.983218 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:10:45.983229 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:10:45.983240 | orchestrator |
2026-03-26 02:10:45.983251 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:10:45.983263 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 02:10:45.983275 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 02:10:45.983292 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 02:10:45.983299 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 02:10:45.983306 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 02:10:45.983313 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 02:10:45.983319 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 02:10:45.983326 | orchestrator |
2026-03-26 02:10:45.983332 | orchestrator |
2026-03-26 02:10:45.983339 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:10:45.983346 | orchestrator | Thursday 26 March 2026 02:10:45 +0000 (0:00:00.786) 0:00:53.661 ********
2026-03-26 02:10:45.983358 | orchestrator | ===============================================================================
2026-03-26 02:10:45.983365 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.88s
2026-03-26 02:10:45.983372 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 6.45s
2026-03-26 02:10:45.983379 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.23s
2026-03-26 02:10:45.983385 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.99s
2026-03-26 02:10:45.983392 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.66s
2026-03-26 02:10:45.983398 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.19s
2026-03-26 02:10:45.983405 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.01s
2026-03-26 02:10:45.983411 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.95s
2026-03-26 02:10:45.983418 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s
2026-03-26 02:10:45.983424 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.76s
2026-03-26 02:10:45.983431 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.67s
2026-03-26 02:10:45.983511 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.44s
2026-03-26 02:10:45.983519 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.44s
2026-03-26 02:10:45.983526 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.37s
2026-03-26 02:10:45.983533 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s
2026-03-26 02:10:45.983540 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.26s
2026-03-26 02:10:45.983547 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s
2026-03-26 02:10:45.983555 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s
2026-03-26 02:10:45.983562 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.04s
2026-03-26 02:10:45.983569 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s
2026-03-26 02:10:46.345090 | orchestrator | + osism apply wireguard
2026-03-26 02:10:58.609376 | orchestrator | 2026-03-26 02:10:58 | INFO  | Task d2213621-6efb-4be0-b77b-16b9f0ccce98 (wireguard) was prepared for execution.
2026-03-26 02:10:58.609545 | orchestrator | 2026-03-26 02:10:58 | INFO  | It takes a moment until task d2213621-6efb-4be0-b77b-16b9f0ccce98 (wireguard) has been started and output is visible here.
2026-03-26 02:11:20.491947 | orchestrator |
2026-03-26 02:11:20.492065 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-03-26 02:11:20.492108 | orchestrator |
2026-03-26 02:11:20.492116 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-03-26 02:11:20.492125 | orchestrator | Thursday 26 March 2026 02:11:03 +0000 (0:00:00.253) 0:00:00.253 ********
2026-03-26 02:11:20.492133 | orchestrator | ok: [testbed-manager]
2026-03-26 02:11:20.492142 | orchestrator |
2026-03-26 02:11:20.492150 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-03-26 02:11:20.492158 | orchestrator | Thursday 26 March 2026 02:11:05 +0000 (0:00:01.799) 0:00:02.053 ********
2026-03-26 02:11:20.492166 | orchestrator | changed: [testbed-manager]
2026-03-26 02:11:20.492179 | orchestrator |
2026-03-26 02:11:20.492187 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-03-26 02:11:20.492195 | orchestrator | Thursday 26 March 2026 02:11:12 +0000 (0:00:07.246) 0:00:09.300 ********
2026-03-26 02:11:20.492202 | orchestrator | changed: [testbed-manager]
2026-03-26 02:11:20.492209 | orchestrator |
2026-03-26 02:11:20.492218 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-03-26 02:11:20.492226 | orchestrator | Thursday 26 March 2026 02:11:12 +0000 (0:00:00.571) 0:00:09.872 ********
2026-03-26 02:11:20.492233 | orchestrator | changed: [testbed-manager]
2026-03-26 02:11:20.492241 | orchestrator |
2026-03-26 02:11:20.492249 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-03-26 02:11:20.492257 | orchestrator | Thursday 26 March 2026 02:11:13 +0000 (0:00:00.497) 0:00:10.370 ********
2026-03-26 02:11:20.492264 | orchestrator | ok: [testbed-manager]
2026-03-26 02:11:20.492272 | orchestrator |
2026-03-26 02:11:20.492280 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-03-26 02:11:20.492287 | orchestrator | Thursday 26 March 2026 02:11:14 +0000 (0:00:00.790) 0:00:11.160 ********
2026-03-26 02:11:20.492294 | orchestrator | ok: [testbed-manager]
2026-03-26 02:11:20.492301 | orchestrator |
2026-03-26 02:11:20.492309 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-03-26 02:11:20.492317 | orchestrator | Thursday 26 March 2026 02:11:14 +0000 (0:00:00.423) 0:00:11.583 ********
2026-03-26 02:11:20.492325 | orchestrator | ok: [testbed-manager]
2026-03-26 02:11:20.492332 | orchestrator |
2026-03-26 02:11:20.492340 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-03-26 02:11:20.492348 | orchestrator | Thursday 26 March 2026 02:11:15 +0000 (0:00:00.423) 0:00:12.007 ********
2026-03-26 02:11:20.492355 | orchestrator | changed: [testbed-manager]
2026-03-26 02:11:20.492362 | orchestrator |
2026-03-26 02:11:20.492370 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-03-26 02:11:20.492377 | orchestrator | Thursday 26 March 2026 02:11:16 +0000 (0:00:01.252) 0:00:13.259 ********
2026-03-26 02:11:20.492385 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-26 02:11:20.492393 | orchestrator | changed: [testbed-manager]
2026-03-26 02:11:20.492400 | orchestrator |
2026-03-26 02:11:20.492407 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-03-26 02:11:20.492415 | orchestrator | Thursday 26 March 2026 02:11:17 +0000 (0:00:00.946) 0:00:14.206 ********
2026-03-26 02:11:20.492422 | orchestrator | changed: [testbed-manager]
2026-03-26 02:11:20.492430 | orchestrator |
2026-03-26 02:11:20.492439 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-03-26 02:11:20.492447 | orchestrator | Thursday 26 March 2026 02:11:19 +0000 (0:00:01.789) 0:00:15.996 ********
2026-03-26 02:11:20.492454 | orchestrator | changed: [testbed-manager]
2026-03-26 02:11:20.492462 | orchestrator |
2026-03-26 02:11:20.492470 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:11:20.492543 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:11:20.492554 | orchestrator |
2026-03-26 02:11:20.492563 | orchestrator |
2026-03-26 02:11:20.492571 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:11:20.492592 | orchestrator | Thursday 26 March 2026 02:11:20 +0000 (0:00:00.949) 0:00:16.946 ********
2026-03-26 02:11:20.492600 | orchestrator | ===============================================================================
2026-03-26 02:11:20.492609 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.25s
2026-03-26 02:11:20.492618 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.80s
2026-03-26 02:11:20.492624 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s
2026-03-26 02:11:20.492630 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.25s
2026-03-26 02:11:20.492636 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s
2026-03-26 02:11:20.492642 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s
2026-03-26 02:11:20.492647 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.79s
2026-03-26 02:11:20.492653 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s
2026-03-26 02:11:20.492658 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s
2026-03-26 02:11:20.492664 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s
2026-03-26 02:11:20.492669 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.42s
2026-03-26 02:11:20.834809 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-03-26 02:11:20.868028 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-03-26 02:11:20.868148 | orchestrator | Dload Upload Total Spent Left Speed
2026-03-26 02:11:20.951068 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 168 0 --:--:-- --:--:-- --:--:-- 170
2026-03-26 02:11:20.966259 | orchestrator | + osism apply --environment custom workarounds
2026-03-26 02:11:23.062172 | orchestrator | 2026-03-26 02:11:23 | INFO  | Trying to run play workarounds in environment custom
2026-03-26 02:11:33.263775 | orchestrator | 2026-03-26 02:11:33 | INFO  | Task 02ff3201-de62-47b0-bff2-96101897c72a (workarounds) was prepared for execution.
2026-03-26 02:11:33.263897 | orchestrator | 2026-03-26 02:11:33 | INFO  | It takes a moment until task 02ff3201-de62-47b0-bff2-96101897c72a (workarounds) has been started and output is visible here.
2026-03-26 02:12:00.022609 | orchestrator |
2026-03-26 02:12:00.022756 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 02:12:00.022773 | orchestrator |
2026-03-26 02:12:00.022784 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-03-26 02:12:00.022795 | orchestrator | Thursday 26 March 2026 02:11:37 +0000 (0:00:00.131) 0:00:00.131 ********
2026-03-26 02:12:00.022805 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-03-26 02:12:00.022817 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-03-26 02:12:00.022827 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-03-26 02:12:00.022837 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-03-26 02:12:00.022846 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-03-26 02:12:00.022856 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-03-26 02:12:00.022865 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-03-26 02:12:00.022875 | orchestrator |
2026-03-26 02:12:00.022884 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-03-26 02:12:00.022894 | orchestrator |
2026-03-26 02:12:00.022903 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-26 02:12:00.022913 | orchestrator | Thursday 26 March 2026 02:11:38 +0000 (0:00:00.840) 0:00:00.972 ********
2026-03-26 02:12:00.022923 | orchestrator | ok: [testbed-manager]
2026-03-26 02:12:00.022985 | orchestrator |
2026-03-26 02:12:00.023005 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-03-26 02:12:00.023021 | orchestrator |
2026-03-26 02:12:00.023037 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-03-26 02:12:00.023054 | orchestrator | Thursday 26 March 2026 02:11:41 +0000 (0:00:02.657) 0:00:03.629 ********
2026-03-26 02:12:00.023071 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:12:00.023087 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:12:00.023101 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:12:00.023116 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:12:00.023131 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:12:00.023146 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:12:00.023162 | orchestrator |
2026-03-26 02:12:00.023178 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-03-26 02:12:00.023193 | orchestrator |
2026-03-26 02:12:00.023209 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-03-26 02:12:00.023246 | orchestrator | Thursday 26 March 2026 02:11:43 +0000 (0:00:01.868) 0:00:05.497 ********
2026-03-26 02:12:00.023264 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-26 02:12:00.023284 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-26 02:12:00.023305 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-26 02:12:00.023323 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-26 02:12:00.023342 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-26 02:12:00.023374 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-03-26 02:12:00.023404 | orchestrator |
2026-03-26 02:12:00.023420 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-03-26 02:12:00.023436 | orchestrator | Thursday 26 March 2026 02:11:44 +0000 (0:00:01.573) 0:00:07.071 ********
2026-03-26 02:12:00.023451 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:12:00.023465 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:12:00.023478 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:12:00.023495 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:12:00.023510 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:12:00.023552 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:12:00.023569 | orchestrator |
2026-03-26 02:12:00.023586 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-03-26 02:12:00.023602 | orchestrator | Thursday 26 March 2026 02:11:48 +0000 (0:00:03.701) 0:00:10.773 ********
2026-03-26 02:12:00.023614 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:12:00.023624 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:12:00.023634 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:12:00.023644 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:12:00.023653 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:12:00.023663 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:12:00.023672 | orchestrator |
2026-03-26 02:12:00.023681 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-03-26 02:12:00.023691 | orchestrator |
2026-03-26 02:12:00.023701 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-03-26 02:12:00.023710 | orchestrator | Thursday 26 March 2026 02:11:49 +0000 (0:00:00.802) 0:00:11.575 ********
2026-03-26 02:12:00.023719 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:12:00.023729 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:12:00.023738 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:12:00.023748 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:12:00.023757 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:12:00.023766 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:12:00.023789 | orchestrator | changed: [testbed-manager]
2026-03-26 02:12:00.023799 | orchestrator |
2026-03-26 02:12:00.023808 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-03-26 02:12:00.023818 | orchestrator | Thursday 26 March 2026 02:11:50 +0000 (0:00:01.725) 0:00:13.300 ********
2026-03-26 02:12:00.023827 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:12:00.023837 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:12:00.023846 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:12:00.023855 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:12:00.023865 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:12:00.023874 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:12:00.023907 | orchestrator | changed: [testbed-manager]
2026-03-26 02:12:00.023917 | orchestrator |
2026-03-26 02:12:00.023927 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-03-26 02:12:00.023937 | orchestrator | Thursday 26 March 2026 02:11:52 +0000 (0:00:01.766) 0:00:15.133 ********
2026-03-26 02:12:00.023946 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:12:00.023956 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:12:00.023965 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:12:00.023975 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:12:00.023984 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:12:00.023993 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:12:00.024003 | orchestrator | ok: [testbed-manager]
2026-03-26 02:12:00.024012 | orchestrator |
2026-03-26 02:12:00.024022 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-03-26 02:12:00.024031 | orchestrator | Thursday 26 March 2026 02:11:54 +0000 (0:00:01.766) 0:00:16.900 ********
2026-03-26 02:12:00.024041 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:12:00.024051 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:12:00.024060 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:12:00.024069 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:12:00.024079 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:12:00.024088 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:12:00.024097 | orchestrator | changed: [testbed-manager]
2026-03-26 02:12:00.024107 | orchestrator |
2026-03-26 02:12:00.024116 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-03-26 02:12:00.024126 | orchestrator | Thursday 26 March 2026 02:11:56 +0000 (0:00:02.071) 0:00:18.971 ********
2026-03-26 02:12:00.024135 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:12:00.024145 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:12:00.024154 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:12:00.024164 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:12:00.024173 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:12:00.024182 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:12:00.024192 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:12:00.024201 | orchestrator |
2026-03-26 02:12:00.024211 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-03-26 02:12:00.024220 | orchestrator |
2026-03-26 02:12:00.024230 | orchestrator | TASK [Install python3-docker] **************************************************
2026-03-26 02:12:00.024239 | orchestrator | Thursday 26 March 2026 02:11:57 +0000 (0:00:00.679) 0:00:19.650 ********
2026-03-26 02:12:00.024249 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:12:00.024258 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:12:00.024268 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:12:00.024277 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:12:00.024286 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:12:00.024304 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:12:00.024314 | orchestrator | ok: [testbed-manager]
2026-03-26 02:12:00.024323 | orchestrator |
2026-03-26 02:12:00.024333 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:12:00.024344 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 02:12:00.024356 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:12:00.024373 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:12:00.024382 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:12:00.024392 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:12:00.024408 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:12:00.024425 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:12:00.024452 | orchestrator |
2026-03-26 02:12:00.024468 | orchestrator |
2026-03-26 02:12:00.024483 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:12:00.024498 | orchestrator | Thursday 26 March 2026 02:11:59 +0000 (0:00:02.830) 0:00:22.481 ********
2026-03-26 02:12:00.024513 | orchestrator | ===============================================================================
2026-03-26 02:12:00.024552 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.70s
2026-03-26 02:12:00.024569 | orchestrator | Install python3-docker -------------------------------------------------- 2.83s
2026-03-26 02:12:00.024585 | orchestrator | Apply netplan configuration --------------------------------------------- 2.66s
2026-03-26 02:12:00.024601 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.07s
2026-03-26 02:12:00.024618 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s
2026-03-26 02:12:00.024635 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.83s
2026-03-26 02:12:00.024650 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.77s
2026-03-26 02:12:00.024666 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.73s
2026-03-26 02:12:00.024683 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.57s
2026-03-26 02:12:00.024700 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.84s
2026-03-26 02:12:00.024715 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.80s
2026-03-26 02:12:00.024742 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s
2026-03-26 02:12:00.827145 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-03-26 02:12:13.132123 | orchestrator | 2026-03-26 02:12:13 | INFO  | Task 82c5765b-791d-4916-9564-ec8dce0bd1ed (reboot) was prepared for execution.
2026-03-26 02:12:13.132218 | orchestrator | 2026-03-26 02:12:13 | INFO  | It takes a moment until task 82c5765b-791d-4916-9564-ec8dce0bd1ed (reboot) has been started and output is visible here.
2026-03-26 02:12:23.822770 | orchestrator |
2026-03-26 02:12:23.822896 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-26 02:12:23.822922 | orchestrator |
2026-03-26 02:12:23.822942 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-26 02:12:23.822961 | orchestrator | Thursday 26 March 2026 02:12:17 +0000 (0:00:00.208) 0:00:00.208 ********
2026-03-26 02:12:23.822981 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:12:23.823000 | orchestrator |
2026-03-26 02:12:23.823018 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-26 02:12:23.823035 | orchestrator | Thursday 26 March 2026 02:12:17 +0000 (0:00:00.118) 0:00:00.326 ********
2026-03-26 02:12:23.823054 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:12:23.823072 | orchestrator |
2026-03-26 02:12:23.823091 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-26 02:12:23.823145 | orchestrator | Thursday 26 March 2026 02:12:18 +0000 (0:00:00.949) 0:00:01.276 ********
2026-03-26 02:12:23.823163 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:12:23.823180 | orchestrator |
2026-03-26 02:12:23.823197 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-26 02:12:23.823214 | orchestrator |
2026-03-26 02:12:23.823232 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-26 02:12:23.823251 | orchestrator | Thursday 26 March 2026 02:12:18 +0000 (0:00:00.120) 0:00:01.396 ********
2026-03-26 02:12:23.823268 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:12:23.823285 | orchestrator |
2026-03-26 02:12:23.823304 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-26 02:12:23.823324 | orchestrator | Thursday 26 March 2026 02:12:18 +0000 (0:00:00.113) 0:00:01.510 ********
2026-03-26 02:12:23.823342 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:12:23.823361 | orchestrator |
2026-03-26 02:12:23.823380 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-26 02:12:23.823418 | orchestrator | Thursday 26 March 2026 02:12:19 +0000 (0:00:00.679) 0:00:02.189 ********
2026-03-26 02:12:23.823439 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:12:23.823458 | orchestrator |
2026-03-26 02:12:23.823478 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-26 02:12:23.823498 | orchestrator |
2026-03-26 02:12:23.823518 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-26 02:12:23.823538 | orchestrator | Thursday 26 March 2026 02:12:19 +0000 (0:00:00.117) 0:00:02.306 ********
2026-03-26 02:12:23.823596 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:12:23.823616 | orchestrator |
2026-03-26 02:12:23.823635 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-26 02:12:23.823653 | orchestrator | Thursday 26 March 2026 02:12:19 +0000 (0:00:00.220) 0:00:02.527 ********
2026-03-26 02:12:23.823672 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:12:23.823691 | orchestrator |
2026-03-26 02:12:23.823712 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-26 02:12:23.823732 | orchestrator | Thursday 26 March 2026 02:12:20 +0000 (0:00:00.675) 0:00:03.203 ********
2026-03-26 02:12:23.823750 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:12:23.823768 | orchestrator |
2026-03-26 02:12:23.823786 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-26 02:12:23.823804 | orchestrator |
2026-03-26 02:12:23.823822 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-26 02:12:23.823841 | orchestrator | Thursday 26 March 2026 02:12:20 +0000 (0:00:00.117) 0:00:03.321 ********
2026-03-26 02:12:23.823858 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:12:23.823877 | orchestrator |
2026-03-26 02:12:23.823894 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-26 02:12:23.823912 | orchestrator | Thursday 26 March 2026 02:12:20 +0000 (0:00:00.119) 0:00:03.440 ********
2026-03-26 02:12:23.823932 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:12:23.823951 | orchestrator |
2026-03-26 02:12:23.823970 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-26 02:12:23.823989 | orchestrator | Thursday 26 March 2026 02:12:21 +0000 (0:00:00.672) 0:00:04.112 ********
2026-03-26 02:12:23.824007 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:12:23.824026 | orchestrator |
2026-03-26 02:12:23.824044 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-26 02:12:23.824062 | orchestrator |
2026-03-26 02:12:23.824079 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-26 02:12:23.824097 | orchestrator | Thursday 26 March 2026 02:12:21 +0000 (0:00:00.133) 0:00:04.246 ********
2026-03-26 02:12:23.824115 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:12:23.824133 | orchestrator |
2026-03-26 02:12:23.824151 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-26 02:12:23.824190 | orchestrator | Thursday 26 March 2026 02:12:21 +0000 (0:00:00.119) 0:00:04.366 ********
2026-03-26 02:12:23.824208 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:12:23.824225 | orchestrator |
2026-03-26 02:12:23.824242 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-26 02:12:23.824261 | orchestrator | Thursday 26 March 2026 02:12:22 +0000 (0:00:00.695) 0:00:05.062 ********
2026-03-26 02:12:23.824280 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:12:23.824298 | orchestrator |
2026-03-26 02:12:23.824316 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-03-26 02:12:23.824331 | orchestrator |
2026-03-26 02:12:23.824348 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-03-26 02:12:23.824366 | orchestrator | Thursday 26 March 2026 02:12:22 +0000 (0:00:00.125) 0:00:05.187 ********
2026-03-26 02:12:23.824383 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:12:23.824400 | orchestrator |
2026-03-26 02:12:23.824418 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-03-26 02:12:23.824437 | orchestrator | Thursday 26 March 2026 02:12:22 +0000 (0:00:00.122) 0:00:05.310 ********
2026-03-26 02:12:23.824456 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:12:23.824474 | orchestrator |
2026-03-26 02:12:23.824492 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-03-26 02:12:23.824511 | orchestrator | Thursday 26 March 2026 02:12:23 +0000 (0:00:00.667) 0:00:05.977 ********
2026-03-26 02:12:23.824595 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:12:23.824619 | orchestrator |
2026-03-26 02:12:23.824638 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:12:23.824657 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:12:23.824701 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:12:23.824721 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:12:23.824738 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:12:23.824757 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:12:23.824776 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:12:23.824795 | orchestrator | 2026-03-26 02:12:23.824814 | orchestrator | 2026-03-26 02:12:23.824833 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:12:23.824851 | orchestrator | Thursday 26 March 2026 02:12:23 +0000 (0:00:00.037) 0:00:06.015 ******** 2026-03-26 02:12:23.824882 | orchestrator | =============================================================================== 2026-03-26 02:12:23.824903 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.34s 2026-03-26 02:12:23.824923 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.81s 2026-03-26 02:12:23.824943 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.65s 2026-03-26 02:12:24.177686 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-26 02:12:36.432844 | orchestrator | 2026-03-26 02:12:36 | INFO  | Task 5436eb36-cc72-4e39-ad74-719ebae5ca2c (wait-for-connection) was prepared for execution. 2026-03-26 02:12:36.432947 | orchestrator | 2026-03-26 02:12:36 | INFO  | It takes a moment until task 5436eb36-cc72-4e39-ad74-719ebae5ca2c (wait-for-connection) has been started and output is visible here. 
2026-03-26 02:12:52.880323 | orchestrator | 2026-03-26 02:12:52.880438 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-26 02:12:52.880456 | orchestrator | 2026-03-26 02:12:52.880468 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-26 02:12:52.880480 | orchestrator | Thursday 26 March 2026 02:12:40 +0000 (0:00:00.242) 0:00:00.242 ******** 2026-03-26 02:12:52.880497 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:12:52.880518 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:12:52.880535 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:12:52.880551 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:12:52.880568 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:12:52.880674 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:12:52.880691 | orchestrator | 2026-03-26 02:12:52.880709 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:12:52.880728 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:12:52.880748 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:12:52.880768 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:12:52.880788 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:12:52.880807 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:12:52.880826 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:12:52.880843 | orchestrator | 2026-03-26 02:12:52.880855 | orchestrator | 2026-03-26 02:12:52.880866 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-26 02:12:52.880878 | orchestrator | Thursday 26 March 2026 02:12:52 +0000 (0:00:11.540) 0:00:11.783 ******** 2026-03-26 02:12:52.880889 | orchestrator | =============================================================================== 2026-03-26 02:12:52.880900 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2026-03-26 02:12:53.226790 | orchestrator | + osism apply hddtemp 2026-03-26 02:13:05.515558 | orchestrator | 2026-03-26 02:13:05 | INFO  | Task c3f3a815-b5d4-49d2-b3ca-aab97cf0a9af (hddtemp) was prepared for execution. 2026-03-26 02:13:05.515707 | orchestrator | 2026-03-26 02:13:05 | INFO  | It takes a moment until task c3f3a815-b5d4-49d2-b3ca-aab97cf0a9af (hddtemp) has been started and output is visible here. 2026-03-26 02:13:33.977102 | orchestrator | 2026-03-26 02:13:33.977218 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-26 02:13:33.977233 | orchestrator | 2026-03-26 02:13:33.977244 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-26 02:13:33.977254 | orchestrator | Thursday 26 March 2026 02:13:09 +0000 (0:00:00.268) 0:00:00.268 ******** 2026-03-26 02:13:33.977264 | orchestrator | ok: [testbed-manager] 2026-03-26 02:13:33.977275 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:13:33.977285 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:13:33.977294 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:13:33.977304 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:13:33.977313 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:13:33.977323 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:13:33.977332 | orchestrator | 2026-03-26 02:13:33.977342 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-03-26 02:13:33.977352 | orchestrator | Thursday 26 March 2026 
02:13:10 +0000 (0:00:00.792) 0:00:01.060 ******** 2026-03-26 02:13:33.977363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:13:33.977398 | orchestrator | 2026-03-26 02:13:33.977409 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-26 02:13:33.977419 | orchestrator | Thursday 26 March 2026 02:13:12 +0000 (0:00:01.332) 0:00:02.393 ******** 2026-03-26 02:13:33.977428 | orchestrator | ok: [testbed-manager] 2026-03-26 02:13:33.977438 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:13:33.977447 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:13:33.977457 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:13:33.977467 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:13:33.977476 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:13:33.977486 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:13:33.977495 | orchestrator | 2026-03-26 02:13:33.977505 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-26 02:13:33.977529 | orchestrator | Thursday 26 March 2026 02:13:14 +0000 (0:00:02.024) 0:00:04.418 ******** 2026-03-26 02:13:33.977539 | orchestrator | changed: [testbed-manager] 2026-03-26 02:13:33.977549 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:13:33.977559 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:13:33.977568 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:13:33.977578 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:13:33.977587 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:13:33.977597 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:13:33.977606 | orchestrator | 2026-03-26 02:13:33.977651 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-03-26 02:13:33.977666 | orchestrator | Thursday 26 March 2026 02:13:15 +0000 (0:00:01.239) 0:00:05.658 ******** 2026-03-26 02:13:33.977678 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:13:33.977689 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:13:33.977700 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:13:33.977711 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:13:33.977722 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:13:33.977733 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:13:33.977745 | orchestrator | ok: [testbed-manager] 2026-03-26 02:13:33.977760 | orchestrator | 2026-03-26 02:13:33.977777 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-26 02:13:33.977804 | orchestrator | Thursday 26 March 2026 02:13:16 +0000 (0:00:01.295) 0:00:06.953 ******** 2026-03-26 02:13:33.977819 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:13:33.977835 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:13:33.977851 | orchestrator | changed: [testbed-manager] 2026-03-26 02:13:33.977866 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:13:33.977881 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:13:33.977896 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:13:33.977912 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:13:33.977928 | orchestrator | 2026-03-26 02:13:33.977943 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-26 02:13:33.977957 | orchestrator | Thursday 26 March 2026 02:13:17 +0000 (0:00:00.935) 0:00:07.889 ******** 2026-03-26 02:13:33.977974 | orchestrator | changed: [testbed-manager] 2026-03-26 02:13:33.977990 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:13:33.978007 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:13:33.978088 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:13:33.978101 | orchestrator | changed: 
[testbed-node-3] 2026-03-26 02:13:33.978112 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:13:33.978121 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:13:33.978131 | orchestrator | 2026-03-26 02:13:33.978140 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-26 02:13:33.978150 | orchestrator | Thursday 26 March 2026 02:13:30 +0000 (0:00:12.465) 0:00:20.355 ******** 2026-03-26 02:13:33.978160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:13:33.978182 | orchestrator | 2026-03-26 02:13:33.978192 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-26 02:13:33.978202 | orchestrator | Thursday 26 March 2026 02:13:31 +0000 (0:00:01.506) 0:00:21.861 ******** 2026-03-26 02:13:33.978211 | orchestrator | changed: [testbed-manager] 2026-03-26 02:13:33.978221 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:13:33.978237 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:13:33.978253 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:13:33.978269 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:13:33.978293 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:13:33.978311 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:13:33.978326 | orchestrator | 2026-03-26 02:13:33.978341 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:13:33.978356 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:13:33.978395 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:13:33.978412 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:13:33.978430 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:13:33.978447 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:13:33.978464 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:13:33.978480 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:13:33.978493 | orchestrator | 2026-03-26 02:13:33.978503 | orchestrator | 2026-03-26 02:13:33.978513 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:13:33.978522 | orchestrator | Thursday 26 March 2026 02:13:33 +0000 (0:00:01.969) 0:00:23.830 ******** 2026-03-26 02:13:33.978532 | orchestrator | =============================================================================== 2026-03-26 02:13:33.978541 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.47s 2026-03-26 02:13:33.978551 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.02s 2026-03-26 02:13:33.978561 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.97s 2026-03-26 02:13:33.978579 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.51s 2026-03-26 02:13:33.978592 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.33s 2026-03-26 02:13:33.978636 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.30s 2026-03-26 02:13:33.978660 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.24s 2026-03-26 02:13:33.978676 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.94s 2026-03-26 02:13:33.978691 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.79s 2026-03-26 02:13:34.341757 | orchestrator | ++ semver 9.5.0 7.1.1 2026-03-26 02:13:34.398218 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-26 02:13:34.398289 | orchestrator | + sudo systemctl restart manager.service 2026-03-26 02:13:48.934228 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-26 02:13:48.934339 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-26 02:13:48.934354 | orchestrator | + local max_attempts=60 2026-03-26 02:13:48.934367 | orchestrator | + local name=ceph-ansible 2026-03-26 02:13:48.934378 | orchestrator | + local attempt_num=1 2026-03-26 02:13:48.934390 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:13:48.973086 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:13:48.973165 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:13:48.973174 | orchestrator | + sleep 5 2026-03-26 02:13:53.980927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:13:54.021283 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:13:54.021390 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:13:54.021404 | orchestrator | + sleep 5 2026-03-26 02:13:59.024818 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:13:59.046796 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:13:59.046878 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:13:59.046887 | orchestrator | + sleep 5 2026-03-26 02:14:04.050985 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:04.086721 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:04.086811 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-03-26 02:14:04.086822 | orchestrator | + sleep 5 2026-03-26 02:14:09.090816 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:09.128599 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:09.128737 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:09.128753 | orchestrator | + sleep 5 2026-03-26 02:14:14.133462 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:14.174226 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:14.174328 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:14.174343 | orchestrator | + sleep 5 2026-03-26 02:14:19.178216 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:19.222168 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:19.222262 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:19.222276 | orchestrator | + sleep 5 2026-03-26 02:14:24.229799 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:24.262762 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:24.262873 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:24.262890 | orchestrator | + sleep 5 2026-03-26 02:14:29.265213 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:29.292127 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:29.292258 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:29.292319 | orchestrator | + sleep 5 2026-03-26 02:14:34.295799 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:34.341599 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:34.341737 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-03-26 02:14:34.341753 | orchestrator | + sleep 5 2026-03-26 02:14:39.347385 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:39.386482 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:39.386613 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:39.386638 | orchestrator | + sleep 5 2026-03-26 02:14:44.390467 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:44.433739 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:44.433812 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:44.433817 | orchestrator | + sleep 5 2026-03-26 02:14:49.438551 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:49.469468 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:49.469559 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-26 02:14:49.469569 | orchestrator | + sleep 5 2026-03-26 02:14:54.473461 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-26 02:14:54.500001 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:54.500079 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-26 02:14:54.500087 | orchestrator | + local max_attempts=60 2026-03-26 02:14:54.500094 | orchestrator | + local name=kolla-ansible 2026-03-26 02:14:54.500099 | orchestrator | + local attempt_num=1 2026-03-26 02:14:54.501578 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-26 02:14:54.537047 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:54.537123 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-26 02:14:54.537156 | orchestrator | + local max_attempts=60 2026-03-26 02:14:54.537162 | orchestrator | + local name=osism-ansible 2026-03-26 02:14:54.537168 | 
orchestrator | + local attempt_num=1 2026-03-26 02:14:54.538367 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-26 02:14:54.569795 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-26 02:14:54.569874 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-26 02:14:54.569884 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-26 02:14:54.737024 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-26 02:14:54.889158 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-26 02:14:55.064977 | orchestrator | ARA in osism-ansible already disabled. 2026-03-26 02:14:55.200888 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-26 02:14:55.201606 | orchestrator | + osism apply gather-facts 2026-03-26 02:15:07.651308 | orchestrator | 2026-03-26 02:15:07 | INFO  | Task 2b1a1c1b-3e66-41d1-9d10-d820120a0f50 (gather-facts) was prepared for execution. 2026-03-26 02:15:07.651452 | orchestrator | 2026-03-26 02:15:07 | INFO  | It takes a moment until task 2b1a1c1b-3e66-41d1-9d10-d820120a0f50 (gather-facts) has been started and output is visible here. 
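The trace above repeatedly runs `docker inspect -f '{{.State.Health.Status}}'` and sleeps 5 seconds until the container reports `healthy`. A minimal sketch of such a polling helper, modeled on the trace (the actual `wait_for_container_healthy` in the testbed scripts may differ, e.g. in its failure handling), could look like:

```shell
#!/usr/bin/env bash
# Sketch of a container-health polling loop, reconstructed from the trace.
# Assumes a `docker` CLI is on PATH; the error message is illustrative.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's reported health status until it is "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container ${name} not healthy after ${max_attempts} attempts" >&2
            return 1
        fi
        sleep 5
    done
}

# Mirrors the trace: poll the ceph-ansible container for up to 60 attempts.
# wait_for_container_healthy 60 ceph-ansible
```

Note that containers typically pass through `starting` (and here even `unhealthy`, before the health check has succeeded once after the manager restart) on the way to `healthy`, which is exactly the status sequence visible in the trace.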
2026-03-26 02:15:21.491665 | orchestrator | 2026-03-26 02:15:21.491810 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-26 02:15:21.491829 | orchestrator | 2026-03-26 02:15:21.491841 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-26 02:15:21.491854 | orchestrator | Thursday 26 March 2026 02:15:12 +0000 (0:00:00.237) 0:00:00.237 ******** 2026-03-26 02:15:21.491866 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:15:21.491878 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:15:21.491889 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:15:21.491901 | orchestrator | ok: [testbed-manager] 2026-03-26 02:15:21.491911 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:15:21.491923 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:15:21.491933 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:15:21.491944 | orchestrator | 2026-03-26 02:15:21.491955 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-26 02:15:21.491966 | orchestrator | 2026-03-26 02:15:21.491977 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-26 02:15:21.491988 | orchestrator | Thursday 26 March 2026 02:15:20 +0000 (0:00:08.322) 0:00:08.559 ******** 2026-03-26 02:15:21.491999 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:15:21.492011 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:15:21.492022 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:15:21.492033 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:15:21.492044 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:15:21.492055 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:15:21.492065 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:15:21.492076 | orchestrator | 2026-03-26 02:15:21.492087 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-26 02:15:21.492098 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:15:21.492111 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:15:21.492122 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:15:21.492133 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:15:21.492144 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:15:21.492155 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:15:21.492194 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 02:15:21.492207 | orchestrator | 2026-03-26 02:15:21.492219 | orchestrator | 2026-03-26 02:15:21.492231 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:15:21.492244 | orchestrator | Thursday 26 March 2026 02:15:21 +0000 (0:00:00.598) 0:00:09.158 ******** 2026-03-26 02:15:21.492260 | orchestrator | =============================================================================== 2026-03-26 02:15:21.492279 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.32s 2026-03-26 02:15:21.492306 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-03-26 02:15:21.821152 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-26 02:15:21.831688 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-26 
02:15:21.844859 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-26 02:15:21.858594 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-26 02:15:21.870307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-26 02:15:21.884901 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-26 02:15:21.897141 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-26 02:15:21.918301 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-26 02:15:21.940666 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-26 02:15:21.957007 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-26 02:15:21.972970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-26 02:15:21.992422 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-26 02:15:22.009825 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-26 02:15:22.023516 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-26 02:15:22.044878 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-26 02:15:22.057602 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-26 02:15:22.070697 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-26 02:15:22.087172 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-26 02:15:22.105859 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-26 02:15:22.121076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-26 02:15:22.135457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-26 02:15:22.148037 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-26 02:15:22.160556 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-26 02:15:22.180257 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-26 02:15:22.290135 | orchestrator | ok: Runtime: 0:25:05.319279 2026-03-26 02:15:22.387544 | 2026-03-26 02:15:22.387686 | TASK [Deploy services] 2026-03-26 02:15:23.107311 | orchestrator | 2026-03-26 02:15:23.107558 | orchestrator | # DEPLOY SERVICES 2026-03-26 02:15:23.107576 | orchestrator | 2026-03-26 02:15:23.107585 | orchestrator | + set -e 2026-03-26 02:15:23.107592 | orchestrator | + echo 2026-03-26 02:15:23.107600 | orchestrator | + echo '# DEPLOY SERVICES' 2026-03-26 02:15:23.107608 | orchestrator | + echo 2026-03-26 02:15:23.107637 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 02:15:23.107650 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 02:15:23.107658 | orchestrator | ++ INTERACTIVE=false 2026-03-26 
02:15:23.107665 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 02:15:23.107678 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 02:15:23.107685 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 02:15:23.107693 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 02:15:23.107699 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 02:15:23.107728 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 02:15:23.107735 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 02:15:23.107743 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 02:15:23.107749 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 02:15:23.107759 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 02:15:23.107766 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 02:15:23.107772 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 02:15:23.107779 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 02:15:23.107785 | orchestrator | ++ export ARA=false 2026-03-26 02:15:23.107791 | orchestrator | ++ ARA=false 2026-03-26 02:15:23.107798 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 02:15:23.107804 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 02:15:23.107809 | orchestrator | ++ export TEMPEST=false 2026-03-26 02:15:23.107816 | orchestrator | ++ TEMPEST=false 2026-03-26 02:15:23.107822 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 02:15:23.107828 | orchestrator | ++ IS_ZUUL=true 2026-03-26 02:15:23.107835 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 02:15:23.107841 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 02:15:23.107847 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 02:15:23.107852 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 02:15:23.107858 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 02:15:23.107864 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 02:15:23.107870 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 
02:15:23.107877 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 02:15:23.107883 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 02:15:23.107891 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 02:15:23.107895 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-26 02:15:23.118352 | orchestrator | + set -e 2026-03-26 02:15:23.118471 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 02:15:23.118484 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 02:15:23.118489 | orchestrator | ++ INTERACTIVE=false 2026-03-26 02:15:23.118493 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 02:15:23.118497 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 02:15:23.118514 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 02:15:23.118518 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 02:15:23.118523 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 02:15:23.118526 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 02:15:23.118530 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 02:15:23.118534 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 02:15:23.118539 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 02:15:23.118567 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 02:15:23.118572 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 02:15:23.118576 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 02:15:23.118579 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 02:15:23.118583 | orchestrator | ++ export ARA=false 2026-03-26 02:15:23.118588 | orchestrator | ++ ARA=false 2026-03-26 02:15:23.118697 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 02:15:23.118719 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 02:15:23.118724 | orchestrator | ++ export TEMPEST=false 2026-03-26 02:15:23.118730 | orchestrator | ++ TEMPEST=false 2026-03-26 02:15:23.118734 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 
02:15:23.118738 | orchestrator | ++ IS_ZUUL=true 2026-03-26 02:15:23.118742 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 02:15:23.118746 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 02:15:23.118750 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 02:15:23.118754 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 02:15:23.118757 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 02:15:23.118761 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 02:15:23.118765 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 02:15:23.118770 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 02:15:23.118794 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 02:15:23.118798 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 02:15:23.118804 | orchestrator | 2026-03-26 02:15:23.118809 | orchestrator | # PULL IMAGES 2026-03-26 02:15:23.118813 | orchestrator | + echo 2026-03-26 02:15:23.118816 | orchestrator | + echo '# PULL IMAGES' 2026-03-26 02:15:23.118820 | orchestrator | + echo 2026-03-26 02:15:23.118824 | orchestrator | 2026-03-26 02:15:23.120637 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-26 02:15:23.187614 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-26 02:15:23.187692 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-26 02:15:25.291693 | orchestrator | 2026-03-26 02:15:25 | INFO  | Trying to run play pull-images in environment custom 2026-03-26 02:15:35.461988 | orchestrator | 2026-03-26 02:15:35 | INFO  | Task 06102fcd-70e1-4004-8e7e-1d35d1447f80 (pull-images) was prepared for execution. 2026-03-26 02:15:35.462187 | orchestrator | 2026-03-26 02:15:35 | INFO  | Task 06102fcd-70e1-4004-8e7e-1d35d1447f80 is running in background. No more output. Check ARA for logs. 
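The pull-images step above is gated on the manager version: `semver 9.5.0 7.0.0` prints `1`, and the script only runs `osism apply` when the result is `>= 0`. A minimal runnable sketch of that gate, assuming a `semver` helper that prints `1`/`0`/`-1` for greater/equal/less (the real helper ships with the testbed scripts; this stand-in only handles plain `X.Y.Z` versions and the `osism` call is stubbed with `echo`):

```shell
#!/usr/bin/env bash
set -e

semver() {
    # Compare two X.Y.Z versions; print 1, 0 or -1 (stand-in for the
    # testbed's semver helper, using GNU sort's version ordering).
    if [ "$1" = "$2" ]; then echo 0; return; fi
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [ "$lower" = "$2" ]; then echo 1; else echo -1; fi
}

MANAGER_VERSION=9.5.0
# Run the play only on managers new enough to ship it (>= 7.0.0 here).
if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```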
2026-03-26 02:15:35.807055 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-03-26 02:15:47.918835 | orchestrator | 2026-03-26 02:15:47 | INFO  | Task 0e8cb748-e270-4d9d-9e6d-8d1250c8a56c (cgit) was prepared for execution. 2026-03-26 02:15:47.918968 | orchestrator | 2026-03-26 02:15:47 | INFO  | Task 0e8cb748-e270-4d9d-9e6d-8d1250c8a56c is running in background. No more output. Check ARA for logs. 2026-03-26 02:16:00.666349 | orchestrator | 2026-03-26 02:16:00 | INFO  | Task b3f22e88-54cd-4802-bc55-577a23115959 (dotfiles) was prepared for execution. 2026-03-26 02:16:00.666481 | orchestrator | 2026-03-26 02:16:00 | INFO  | Task b3f22e88-54cd-4802-bc55-577a23115959 is running in background. No more output. Check ARA for logs. 2026-03-26 02:16:13.411629 | orchestrator | 2026-03-26 02:16:13 | INFO  | Task 39bcad9c-bdae-4e22-b89c-b6f057833c05 (homer) was prepared for execution. 2026-03-26 02:16:13.411718 | orchestrator | 2026-03-26 02:16:13 | INFO  | Task 39bcad9c-bdae-4e22-b89c-b6f057833c05 is running in background. No more output. Check ARA for logs. 2026-03-26 02:16:26.314428 | orchestrator | 2026-03-26 02:16:26 | INFO  | Task 4212e8d9-a5f3-4e44-a65a-eec554ca52dc (phpmyadmin) was prepared for execution. 2026-03-26 02:16:26.314543 | orchestrator | 2026-03-26 02:16:26 | INFO  | Task 4212e8d9-a5f3-4e44-a65a-eec554ca52dc is running in background. No more output. Check ARA for logs. 2026-03-26 02:16:38.936648 | orchestrator | 2026-03-26 02:16:38 | INFO  | Task b5a31f83-3df5-4041-9b65-a060944e7586 (sosreport) was prepared for execution. 2026-03-26 02:16:38.936755 | orchestrator | 2026-03-26 02:16:38 | INFO  | Task b5a31f83-3df5-4041-9b65-a060944e7586 is running in background. No more output. Check ARA for logs. 
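The `001-helpers.sh` step above queues one background task per helper service (cgit, dotfiles, homer, phpmyadmin, sosreport). A hedged sketch of that loop, with the real `osism apply <name>` call stubbed by `echo` so it runs stand-alone:

```shell
#!/usr/bin/env bash
set -e

deploy_helper() {
    # Assumption: in the testbed this runs `osism apply "$1"`, which
    # queues the play in the background (output then lands in ARA).
    echo "apply $1"
}

# The helper services deployed in order by 001-helpers.sh above.
for service in cgit dotfiles homer phpmyadmin sosreport; do
    deploy_helper "$service"
done
```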
2026-03-26 02:16:39.336126 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-03-26 02:16:39.342991 | orchestrator | + set -e 2026-03-26 02:16:39.343082 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 02:16:39.343098 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 02:16:39.343110 | orchestrator | ++ INTERACTIVE=false 2026-03-26 02:16:39.343123 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 02:16:39.343135 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 02:16:39.343146 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 02:16:39.343157 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 02:16:39.343169 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 02:16:39.343180 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 02:16:39.343191 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 02:16:39.343202 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 02:16:39.343214 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 02:16:39.343225 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 02:16:39.343258 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 02:16:39.343270 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 02:16:39.343291 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 02:16:39.343303 | orchestrator | ++ export ARA=false 2026-03-26 02:16:39.343314 | orchestrator | ++ ARA=false 2026-03-26 02:16:39.343325 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 02:16:39.343365 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 02:16:39.343377 | orchestrator | ++ export TEMPEST=false 2026-03-26 02:16:39.343388 | orchestrator | ++ TEMPEST=false 2026-03-26 02:16:39.343399 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 02:16:39.343410 | orchestrator | ++ IS_ZUUL=true 2026-03-26 02:16:39.343437 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 02:16:39.343454 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 02:16:39.343466 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 02:16:39.343477 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 02:16:39.343488 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 02:16:39.343499 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 02:16:39.343510 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 02:16:39.343521 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 02:16:39.343532 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 02:16:39.343545 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 02:16:39.343631 | orchestrator | ++ semver 9.5.0 8.0.3 2026-03-26 02:16:39.436153 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-26 02:16:39.436250 | orchestrator | + osism apply frr 2026-03-26 02:16:51.866152 | orchestrator | 2026-03-26 02:16:51 | INFO  | Task 3d590088-fe2b-4236-a786-40d95fe4bd4a (frr) was prepared for execution. 2026-03-26 02:16:51.866280 | orchestrator | 2026-03-26 02:16:51 | INFO  | It takes a moment until task 3d590088-fe2b-4236-a786-40d95fe4bd4a (frr) has been started and output is visible here. 
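Every deploy script in this log repeats the same preamble: `set -e`, then `source /opt/configuration/scripts/include.sh` and `source /opt/manager-vars.sh`, which export the testbed parameters into the script's environment. A runnable sketch of that pattern, using a temp file as a stand-in for `/opt/manager-vars.sh` (the variable values mirror the trace above):

```shell
#!/usr/bin/env bash
set -e

# Stand-in for /opt/manager-vars.sh; the real file is generated per run.
vars=$(mktemp)
cat > "$vars" <<'EOF'
export INTERACTIVE=false
export OSISM_APPLY_RETRY=1
export MANAGER_VERSION=9.5.0
export OPENSTACK_VERSION=2024.2
export CEPH_VERSION=reef
EOF

# Sourcing exports the parameters into this shell, exactly as the
# deploy scripts do before calling osism.
source "$vars"
echo "manager=$MANAGER_VERSION openstack=$OPENSTACK_VERSION"
rm -f "$vars"
```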
2026-03-26 02:17:30.415054 | orchestrator | 2026-03-26 02:17:30.415147 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-26 02:17:30.415158 | orchestrator | 2026-03-26 02:17:30.415165 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-26 02:17:30.415175 | orchestrator | Thursday 26 March 2026 02:16:58 +0000 (0:00:00.547) 0:00:00.547 ******** 2026-03-26 02:17:30.415181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-26 02:17:30.415188 | orchestrator | 2026-03-26 02:17:30.415194 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-26 02:17:30.415200 | orchestrator | Thursday 26 March 2026 02:16:59 +0000 (0:00:00.632) 0:00:01.180 ******** 2026-03-26 02:17:30.415206 | orchestrator | changed: [testbed-manager] 2026-03-26 02:17:30.415213 | orchestrator | 2026-03-26 02:17:30.415219 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-26 02:17:30.415226 | orchestrator | Thursday 26 March 2026 02:17:04 +0000 (0:00:05.690) 0:00:06.871 ******** 2026-03-26 02:17:30.415232 | orchestrator | changed: [testbed-manager] 2026-03-26 02:17:30.415237 | orchestrator | 2026-03-26 02:17:30.415243 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-26 02:17:30.415249 | orchestrator | Thursday 26 March 2026 02:17:17 +0000 (0:00:13.118) 0:00:19.989 ******** 2026-03-26 02:17:30.415254 | orchestrator | ok: [testbed-manager] 2026-03-26 02:17:30.415261 | orchestrator | 2026-03-26 02:17:30.415267 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-26 02:17:30.415272 | orchestrator | Thursday 26 March 2026 02:17:19 +0000 (0:00:01.409) 0:00:21.398 ******** 2026-03-26 
02:17:30.415278 | orchestrator | changed: [testbed-manager] 2026-03-26 02:17:30.415283 | orchestrator | 2026-03-26 02:17:30.415289 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-26 02:17:30.415295 | orchestrator | Thursday 26 March 2026 02:17:21 +0000 (0:00:01.636) 0:00:23.035 ******** 2026-03-26 02:17:30.415300 | orchestrator | ok: [testbed-manager] 2026-03-26 02:17:30.415306 | orchestrator | 2026-03-26 02:17:30.415311 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-26 02:17:30.415318 | orchestrator | Thursday 26 March 2026 02:17:22 +0000 (0:00:01.493) 0:00:24.528 ******** 2026-03-26 02:17:30.415324 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:17:30.415329 | orchestrator | 2026-03-26 02:17:30.415335 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-26 02:17:30.415341 | orchestrator | Thursday 26 March 2026 02:17:22 +0000 (0:00:00.158) 0:00:24.686 ******** 2026-03-26 02:17:30.415369 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:17:30.415380 | orchestrator | 2026-03-26 02:17:30.415388 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-26 02:17:30.415398 | orchestrator | Thursday 26 March 2026 02:17:22 +0000 (0:00:00.193) 0:00:24.879 ******** 2026-03-26 02:17:30.415407 | orchestrator | changed: [testbed-manager] 2026-03-26 02:17:30.415415 | orchestrator | 2026-03-26 02:17:30.415423 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-26 02:17:30.415431 | orchestrator | Thursday 26 March 2026 02:17:23 +0000 (0:00:00.979) 0:00:25.859 ******** 2026-03-26 02:17:30.415441 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-26 02:17:30.415450 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-26 02:17:30.415461 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-26 02:17:30.415470 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-26 02:17:30.415491 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-26 02:17:30.415502 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-26 02:17:30.415510 | orchestrator | 2026-03-26 02:17:30.415519 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-26 02:17:30.415527 | orchestrator | Thursday 26 March 2026 02:17:26 +0000 (0:00:02.627) 0:00:28.487 ******** 2026-03-26 02:17:30.415535 | orchestrator | ok: [testbed-manager] 2026-03-26 02:17:30.415544 | orchestrator | 2026-03-26 02:17:30.415552 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-26 02:17:30.415561 | orchestrator | Thursday 26 March 2026 02:17:28 +0000 (0:00:02.013) 0:00:30.500 ******** 2026-03-26 02:17:30.415570 | orchestrator | changed: [testbed-manager] 2026-03-26 02:17:30.415580 | orchestrator | 2026-03-26 02:17:30.415589 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:17:30.415599 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 02:17:30.415606 | orchestrator | 2026-03-26 02:17:30.415612 | orchestrator | 2026-03-26 02:17:30.415624 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:17:30.415631 | orchestrator | Thursday 26 March 2026 02:17:30 +0000 (0:00:01.549) 0:00:32.050 ******** 2026-03-26 02:17:30.415637 | 
orchestrator | =============================================================================== 2026-03-26 02:17:30.415644 | orchestrator | osism.services.frr : Install frr package ------------------------------- 13.12s 2026-03-26 02:17:30.415650 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 5.69s 2026-03-26 02:17:30.415659 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.63s 2026-03-26 02:17:30.415668 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.01s 2026-03-26 02:17:30.415678 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.64s 2026-03-26 02:17:30.415707 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.55s 2026-03-26 02:17:30.415718 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.49s 2026-03-26 02:17:30.415728 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.41s 2026-03-26 02:17:30.415738 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.98s 2026-03-26 02:17:30.415748 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.63s 2026-03-26 02:17:30.415759 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s 2026-03-26 02:17:30.415768 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-26 02:17:30.812747 | orchestrator | + osism apply kubernetes 2026-03-26 02:17:33.353899 | orchestrator | 2026-03-26 02:17:33 | INFO  | Task 5f21ae79-fb4c-403b-8e3d-454a1387a6c5 (kubernetes) was prepared for execution. 
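The "Set sysctl parameters" task in the frr play above applies six `net.ipv4` keys. A hedged sketch that renders the same key/value pairs into a sysctl.d-style file (written to a temp path here so it needs no privileges; the role itself applies them via Ansible's sysctl handling):

```shell
#!/usr/bin/env bash
set -e

render_sysctl() {
    # Write key=value arguments as "key = value" lines into $1.
    out=$1; shift
    : > "$out"
    for kv in "$@"; do
        printf '%s = %s\n' "${kv%%=*}" "${kv#*=}" >> "$out"
    done
}

conf=$(mktemp)
# The exact items applied by osism.services.frr in the play above.
render_sysctl "$conf" \
    net.ipv4.ip_forward=1 \
    net.ipv4.conf.all.send_redirects=0 \
    net.ipv4.conf.all.accept_redirects=0 \
    net.ipv4.fib_multipath_hash_policy=1 \
    net.ipv4.conf.default.ignore_routes_with_linkdown=1 \
    net.ipv4.conf.all.rp_filter=2
cat "$conf"
```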
2026-03-26 02:17:33.354091 | orchestrator | 2026-03-26 02:17:33 | INFO  | It takes a moment until task 5f21ae79-fb4c-403b-8e3d-454a1387a6c5 (kubernetes) has been started and output is visible here. 2026-03-26 02:17:59.910524 | orchestrator | 2026-03-26 02:17:59.910622 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-26 02:17:59.910633 | orchestrator | 2026-03-26 02:17:59.910640 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-26 02:17:59.910648 | orchestrator | Thursday 26 March 2026 02:17:38 +0000 (0:00:00.182) 0:00:00.182 ******** 2026-03-26 02:17:59.910655 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:17:59.910662 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:17:59.910668 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:17:59.910675 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:17:59.910681 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:17:59.910687 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:17:59.910693 | orchestrator | 2026-03-26 02:17:59.910699 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-26 02:17:59.910705 | orchestrator | Thursday 26 March 2026 02:17:39 +0000 (0:00:00.753) 0:00:00.935 ******** 2026-03-26 02:17:59.910711 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.910718 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.910724 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.910730 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.910736 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.910742 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.910748 | orchestrator | 2026-03-26 02:17:59.910754 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-26 02:17:59.910762 | orchestrator | Thursday 26 March 2026 
02:17:40 +0000 (0:00:00.641) 0:00:01.577 ******** 2026-03-26 02:17:59.910768 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.910774 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.910780 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.910786 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.910792 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.910798 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.910804 | orchestrator | 2026-03-26 02:17:59.910810 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-26 02:17:59.910816 | orchestrator | Thursday 26 March 2026 02:17:40 +0000 (0:00:00.818) 0:00:02.396 ******** 2026-03-26 02:17:59.910821 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:17:59.910872 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:17:59.910879 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:17:59.910889 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:17:59.910895 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:17:59.910901 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:17:59.910906 | orchestrator | 2026-03-26 02:17:59.910913 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-26 02:17:59.910919 | orchestrator | Thursday 26 March 2026 02:17:42 +0000 (0:00:02.007) 0:00:04.403 ******** 2026-03-26 02:17:59.910925 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:17:59.910931 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:17:59.910937 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:17:59.910943 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:17:59.910949 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:17:59.910955 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:17:59.910961 | orchestrator | 2026-03-26 02:17:59.910967 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-03-26 02:17:59.910973 | orchestrator | Thursday 26 March 2026 02:17:45 +0000 (0:00:02.155) 0:00:06.558 ******** 2026-03-26 02:17:59.910979 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:17:59.911002 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:17:59.911008 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:17:59.911014 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:17:59.911020 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:17:59.911026 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:17:59.911032 | orchestrator | 2026-03-26 02:17:59.911043 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-26 02:17:59.911049 | orchestrator | Thursday 26 March 2026 02:17:46 +0000 (0:00:01.182) 0:00:07.741 ******** 2026-03-26 02:17:59.911055 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911061 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911066 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911073 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911079 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911086 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.911092 | orchestrator | 2026-03-26 02:17:59.911098 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-26 02:17:59.911105 | orchestrator | Thursday 26 March 2026 02:17:47 +0000 (0:00:00.898) 0:00:08.639 ******** 2026-03-26 02:17:59.911111 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911118 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911124 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911130 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911136 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911143 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 02:17:59.911149 | orchestrator | 2026-03-26 02:17:59.911155 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-26 02:17:59.911162 | orchestrator | Thursday 26 March 2026 02:17:47 +0000 (0:00:00.604) 0:00:09.244 ******** 2026-03-26 02:17:59.911168 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 02:17:59.911175 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 02:17:59.911181 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911188 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 02:17:59.911194 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 02:17:59.911199 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911205 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 02:17:59.911211 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 02:17:59.911217 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911223 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 02:17:59.911241 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 02:17:59.911247 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911253 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 02:17:59.911259 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 02:17:59.911265 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911271 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 02:17:59.911277 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 02:17:59.911282 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.911288 | orchestrator | 2026-03-26 02:17:59.911294 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-26 02:17:59.911300 | orchestrator | Thursday 26 March 2026 02:17:48 +0000 (0:00:00.709) 0:00:09.953 ******** 2026-03-26 02:17:59.911306 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911312 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911318 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911328 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911334 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911340 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.911345 | orchestrator | 2026-03-26 02:17:59.911351 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-26 02:17:59.911358 | orchestrator | Thursday 26 March 2026 02:17:49 +0000 (0:00:01.295) 0:00:11.249 ******** 2026-03-26 02:17:59.911364 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:17:59.911370 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:17:59.911376 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:17:59.911382 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:17:59.911387 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:17:59.911393 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:17:59.911399 | orchestrator | 2026-03-26 02:17:59.911405 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-26 02:17:59.911411 | orchestrator | Thursday 26 March 2026 02:17:50 +0000 (0:00:00.855) 0:00:12.104 ******** 2026-03-26 02:17:59.911417 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:17:59.911423 | orchestrator | changed: 
[testbed-node-3] 2026-03-26 02:17:59.911428 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:17:59.911434 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:17:59.911440 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:17:59.911446 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:17:59.911452 | orchestrator | 2026-03-26 02:17:59.911457 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-26 02:17:59.911463 | orchestrator | Thursday 26 March 2026 02:17:55 +0000 (0:00:04.830) 0:00:16.935 ******** 2026-03-26 02:17:59.911469 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911479 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911485 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911491 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911496 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911502 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.911508 | orchestrator | 2026-03-26 02:17:59.911514 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-26 02:17:59.911520 | orchestrator | Thursday 26 March 2026 02:17:56 +0000 (0:00:01.074) 0:00:18.009 ******** 2026-03-26 02:17:59.911526 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911532 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911537 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911543 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911549 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911555 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.911561 | orchestrator | 2026-03-26 02:17:59.911567 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-26 02:17:59.911574 | orchestrator | Thursday 26 
March 2026 02:17:57 +0000 (0:00:01.474) 0:00:19.483 ******** 2026-03-26 02:17:59.911580 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911585 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911591 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911597 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911603 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911609 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.911614 | orchestrator | 2026-03-26 02:17:59.911620 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-26 02:17:59.911626 | orchestrator | Thursday 26 March 2026 02:17:58 +0000 (0:00:00.776) 0:00:20.260 ******** 2026-03-26 02:17:59.911632 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-26 02:17:59.911643 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-26 02:17:59.911648 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-26 02:17:59.911654 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-26 02:17:59.911664 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:17:59.911670 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:17:59.911676 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-26 02:17:59.911682 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-26 02:17:59.911688 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-26 02:17:59.911694 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-26 02:17:59.911699 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:17:59.911705 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:17:59.911711 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-26 02:17:59.911717 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-26 
02:17:59.911723 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:17:59.911728 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-26 02:17:59.911734 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-26 02:17:59.911740 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:17:59.911746 | orchestrator | 2026-03-26 02:17:59.911752 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-26 02:17:59.911761 | orchestrator | Thursday 26 March 2026 02:17:59 +0000 (0:00:01.121) 0:00:21.381 ******** 2026-03-26 02:19:18.207473 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:19:18.207558 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:19:18.207564 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:19:18.207569 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:19:18.207573 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.207577 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.207582 | orchestrator | 2026-03-26 02:19:18.207587 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-26 02:19:18.207608 | orchestrator | Thursday 26 March 2026 02:18:00 +0000 (0:00:00.675) 0:00:22.057 ******** 2026-03-26 02:19:18.207612 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:19:18.207616 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:19:18.207620 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:19:18.207624 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:19:18.207629 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.207633 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.207637 | orchestrator | 2026-03-26 02:19:18.207641 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-26 02:19:18.207645 | orchestrator | 2026-03-26 
02:19:18.207649 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-26 02:19:18.207654 | orchestrator | Thursday 26 March 2026 02:18:01 +0000 (0:00:01.276) 0:00:23.334 ******** 2026-03-26 02:19:18.207658 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.207663 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.207667 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.207670 | orchestrator | 2026-03-26 02:19:18.207674 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-26 02:19:18.207678 | orchestrator | Thursday 26 March 2026 02:18:03 +0000 (0:00:01.765) 0:00:25.099 ******** 2026-03-26 02:19:18.207682 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.207686 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.207690 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.207694 | orchestrator | 2026-03-26 02:19:18.207706 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-26 02:19:18.207711 | orchestrator | Thursday 26 March 2026 02:18:05 +0000 (0:00:01.539) 0:00:26.639 ******** 2026-03-26 02:19:18.207714 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.207724 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.207728 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.207733 | orchestrator | 2026-03-26 02:19:18.207737 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-26 02:19:18.207741 | orchestrator | Thursday 26 March 2026 02:18:06 +0000 (0:00:01.156) 0:00:27.795 ******** 2026-03-26 02:19:18.207759 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.207763 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.207766 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.207770 | orchestrator | 2026-03-26 02:19:18.207774 | orchestrator | TASK [k3s_server : Deploy K3s 
http_proxy conf] ********************************* 2026-03-26 02:19:18.207778 | orchestrator | Thursday 26 March 2026 02:18:07 +0000 (0:00:00.839) 0:00:28.635 ******** 2026-03-26 02:19:18.207782 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:19:18.207786 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.207790 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.207793 | orchestrator | 2026-03-26 02:19:18.207797 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-26 02:19:18.207814 | orchestrator | Thursday 26 March 2026 02:18:07 +0000 (0:00:00.419) 0:00:29.054 ******** 2026-03-26 02:19:18.207818 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:19:18.207822 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:19:18.207826 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:19:18.207830 | orchestrator | 2026-03-26 02:19:18.207834 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-26 02:19:18.207837 | orchestrator | Thursday 26 March 2026 02:18:08 +0000 (0:00:01.204) 0:00:30.259 ******** 2026-03-26 02:19:18.207841 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:19:18.207845 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:19:18.207849 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:19:18.207853 | orchestrator | 2026-03-26 02:19:18.207857 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-26 02:19:18.207861 | orchestrator | Thursday 26 March 2026 02:18:10 +0000 (0:00:01.634) 0:00:31.893 ******** 2026-03-26 02:19:18.207865 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:19:18.207869 | orchestrator | 2026-03-26 02:19:18.207873 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-26 02:19:18.207877 
| orchestrator | Thursday 26 March 2026 02:18:11 +0000 (0:00:00.709) 0:00:32.603 ******** 2026-03-26 02:19:18.207957 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.207962 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.207966 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.207970 | orchestrator | 2026-03-26 02:19:18.207974 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-26 02:19:18.207977 | orchestrator | Thursday 26 March 2026 02:18:14 +0000 (0:00:02.900) 0:00:35.503 ******** 2026-03-26 02:19:18.207981 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.207985 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.207989 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:19:18.207993 | orchestrator | 2026-03-26 02:19:18.207997 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-26 02:19:18.208001 | orchestrator | Thursday 26 March 2026 02:18:14 +0000 (0:00:00.553) 0:00:36.056 ******** 2026-03-26 02:19:18.208004 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.208008 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.208012 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:19:18.208016 | orchestrator | 2026-03-26 02:19:18.208020 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-26 02:19:18.208023 | orchestrator | Thursday 26 March 2026 02:18:15 +0000 (0:00:00.796) 0:00:36.853 ******** 2026-03-26 02:19:18.208027 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.208032 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.208036 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:19:18.208041 | orchestrator | 2026-03-26 02:19:18.208045 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-26 02:19:18.208061 | orchestrator | 
Thursday 26 March 2026 02:18:16 +0000 (0:00:01.279) 0:00:38.133 ******** 2026-03-26 02:19:18.208066 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:19:18.208077 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.208082 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.208086 | orchestrator | 2026-03-26 02:19:18.208090 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-26 02:19:18.208095 | orchestrator | Thursday 26 March 2026 02:18:17 +0000 (0:00:00.802) 0:00:38.935 ******** 2026-03-26 02:19:18.208099 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:19:18.208104 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.208108 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.208112 | orchestrator | 2026-03-26 02:19:18.208117 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-26 02:19:18.208121 | orchestrator | Thursday 26 March 2026 02:18:17 +0000 (0:00:00.398) 0:00:39.334 ******** 2026-03-26 02:19:18.208125 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:19:18.208130 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:19:18.208134 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:19:18.208138 | orchestrator | 2026-03-26 02:19:18.208146 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-26 02:19:18.208151 | orchestrator | Thursday 26 March 2026 02:18:18 +0000 (0:00:01.123) 0:00:40.457 ******** 2026-03-26 02:19:18.208155 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.208159 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.208164 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.208168 | orchestrator | 2026-03-26 02:19:18.208172 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-26 02:19:18.208177 | orchestrator | Thursday 26 March 2026 
02:18:22 +0000 (0:00:03.250) 0:00:43.707 ******** 2026-03-26 02:19:18.208181 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.208185 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.208190 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.208198 | orchestrator | 2026-03-26 02:19:18.208202 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-26 02:19:18.208207 | orchestrator | Thursday 26 March 2026 02:18:22 +0000 (0:00:00.530) 0:00:44.238 ******** 2026-03-26 02:19:18.208211 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-26 02:19:18.208217 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-26 02:19:18.208221 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-26 02:19:18.208226 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-26 02:19:18.208230 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-26 02:19:18.208234 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-26 02:19:18.208239 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-26 02:19:18.208243 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-03-26 02:19:18.208248 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-26 02:19:18.208252 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-26 02:19:18.208256 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-26 02:19:18.208265 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-26 02:19:18.208269 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-26 02:19:18.208274 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-03-26 02:19:18.208278 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-03-26 02:19:18.208282 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:19:18.208287 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:19:18.208291 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:19:18.208295 | orchestrator | 2026-03-26 02:19:18.208303 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-26 02:19:18.208308 | orchestrator | Thursday 26 March 2026 02:19:16 +0000 (0:00:54.114) 0:01:38.353 ******** 2026-03-26 02:19:18.208312 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:19:18.208317 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:19:18.208321 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:19:18.208325 | orchestrator | 2026-03-26 02:19:18.208329 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-26 02:19:18.208334 | orchestrator | Thursday 26 March 2026 02:19:17 +0000 (0:00:00.342) 0:01:38.695 ******** 2026-03-26 02:19:18.208341 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.617892 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.618215 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.618247 | orchestrator | 2026-03-26 02:20:01.618269 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-26 02:20:01.618290 | orchestrator | Thursday 26 March 2026 02:19:18 +0000 (0:00:00.993) 0:01:39.689 ******** 2026-03-26 02:20:01.618311 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.618332 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.618351 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.618372 | orchestrator | 2026-03-26 02:20:01.618392 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-26 02:20:01.618411 | orchestrator | Thursday 26 March 2026 02:19:19 +0000 (0:00:01.187) 0:01:40.877 ******** 2026-03-26 02:20:01.618430 
| orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.618451 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.618472 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.618492 | orchestrator | 2026-03-26 02:20:01.618514 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-26 02:20:01.618534 | orchestrator | Thursday 26 March 2026 02:19:46 +0000 (0:00:27.412) 0:02:08.290 ******** 2026-03-26 02:20:01.618554 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:20:01.618568 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:20:01.618580 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:20:01.618593 | orchestrator | 2026-03-26 02:20:01.618607 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-26 02:20:01.618620 | orchestrator | Thursday 26 March 2026 02:19:47 +0000 (0:00:00.649) 0:02:08.940 ******** 2026-03-26 02:20:01.618633 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:20:01.618646 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:20:01.618658 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:20:01.618671 | orchestrator | 2026-03-26 02:20:01.618684 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-26 02:20:01.618697 | orchestrator | Thursday 26 March 2026 02:19:48 +0000 (0:00:00.639) 0:02:09.580 ******** 2026-03-26 02:20:01.618710 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.618722 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.618735 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.618748 | orchestrator | 2026-03-26 02:20:01.618760 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-26 02:20:01.618802 | orchestrator | Thursday 26 March 2026 02:19:48 +0000 (0:00:00.667) 0:02:10.248 ******** 2026-03-26 02:20:01.618813 | orchestrator | ok: [testbed-node-0] 
2026-03-26 02:20:01.618824 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:20:01.618835 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:20:01.618845 | orchestrator | 2026-03-26 02:20:01.618856 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-26 02:20:01.618867 | orchestrator | Thursday 26 March 2026 02:19:49 +0000 (0:00:00.819) 0:02:11.067 ******** 2026-03-26 02:20:01.618878 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:20:01.618889 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:20:01.618900 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:20:01.618951 | orchestrator | 2026-03-26 02:20:01.618965 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-26 02:20:01.618976 | orchestrator | Thursday 26 March 2026 02:19:49 +0000 (0:00:00.332) 0:02:11.400 ******** 2026-03-26 02:20:01.618987 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.618998 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.619008 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.619019 | orchestrator | 2026-03-26 02:20:01.619030 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-26 02:20:01.619041 | orchestrator | Thursday 26 March 2026 02:19:50 +0000 (0:00:00.665) 0:02:12.066 ******** 2026-03-26 02:20:01.619052 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.619063 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.619075 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.619086 | orchestrator | 2026-03-26 02:20:01.619097 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-26 02:20:01.619108 | orchestrator | Thursday 26 March 2026 02:19:51 +0000 (0:00:00.672) 0:02:12.739 ******** 2026-03-26 02:20:01.619119 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.619130 | 
orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.619141 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.619152 | orchestrator | 2026-03-26 02:20:01.619164 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-26 02:20:01.619175 | orchestrator | Thursday 26 March 2026 02:19:52 +0000 (0:00:01.122) 0:02:13.861 ******** 2026-03-26 02:20:01.619188 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:20:01.619199 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:20:01.619210 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:20:01.619221 | orchestrator | 2026-03-26 02:20:01.619232 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-26 02:20:01.619243 | orchestrator | Thursday 26 March 2026 02:19:53 +0000 (0:00:00.881) 0:02:14.742 ******** 2026-03-26 02:20:01.619253 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:20:01.619264 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:20:01.619275 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:20:01.619286 | orchestrator | 2026-03-26 02:20:01.619297 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-26 02:20:01.619308 | orchestrator | Thursday 26 March 2026 02:19:53 +0000 (0:00:00.310) 0:02:15.053 ******** 2026-03-26 02:20:01.619319 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:20:01.619330 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:20:01.619341 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:20:01.619352 | orchestrator | 2026-03-26 02:20:01.619363 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-26 02:20:01.619374 | orchestrator | Thursday 26 March 2026 02:19:53 +0000 (0:00:00.344) 0:02:15.397 ******** 2026-03-26 02:20:01.619385 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:20:01.619395 | orchestrator | 
ok: [testbed-node-2] 2026-03-26 02:20:01.619406 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:20:01.619417 | orchestrator | 2026-03-26 02:20:01.619431 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-26 02:20:01.619450 | orchestrator | Thursday 26 March 2026 02:19:54 +0000 (0:00:00.639) 0:02:16.037 ******** 2026-03-26 02:20:01.619480 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:20:01.619500 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:20:01.619544 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:20:01.619567 | orchestrator | 2026-03-26 02:20:01.619588 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-26 02:20:01.619608 | orchestrator | Thursday 26 March 2026 02:19:55 +0000 (0:00:00.907) 0:02:16.944 ******** 2026-03-26 02:20:01.619627 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-26 02:20:01.619647 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-26 02:20:01.619666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-26 02:20:01.619686 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-26 02:20:01.619706 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-26 02:20:01.619725 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-26 02:20:01.619743 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-26 02:20:01.619764 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-26 
02:20:01.619785 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-26 02:20:01.619804 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-26 02:20:01.619821 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-26 02:20:01.619841 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-26 02:20:01.619860 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-26 02:20:01.619878 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-26 02:20:01.619895 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-26 02:20:01.619939 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-26 02:20:01.619971 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-26 02:20:01.619991 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-26 02:20:01.620010 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-26 02:20:01.620029 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-26 02:20:01.620047 | orchestrator | 2026-03-26 02:20:01.620064 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-26 02:20:01.620075 | orchestrator | 2026-03-26 02:20:01.620086 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-26 02:20:01.620097 | orchestrator | Thursday 26 March 2026 02:19:58 +0000 (0:00:03.018) 
0:02:19.963 ******** 2026-03-26 02:20:01.620107 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:20:01.620118 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:20:01.620129 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:20:01.620140 | orchestrator | 2026-03-26 02:20:01.620168 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-26 02:20:01.620179 | orchestrator | Thursday 26 March 2026 02:19:58 +0000 (0:00:00.350) 0:02:20.314 ******** 2026-03-26 02:20:01.620190 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:20:01.620201 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:20:01.620212 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:20:01.620233 | orchestrator | 2026-03-26 02:20:01.620244 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-26 02:20:01.620255 | orchestrator | Thursday 26 March 2026 02:19:59 +0000 (0:00:00.880) 0:02:21.195 ******** 2026-03-26 02:20:01.620266 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:20:01.620277 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:20:01.620288 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:20:01.620310 | orchestrator | 2026-03-26 02:20:01.620321 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-26 02:20:01.620332 | orchestrator | Thursday 26 March 2026 02:20:00 +0000 (0:00:00.323) 0:02:21.518 ******** 2026-03-26 02:20:01.620343 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:20:01.620355 | orchestrator | 2026-03-26 02:20:01.620366 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-26 02:20:01.620377 | orchestrator | Thursday 26 March 2026 02:20:00 +0000 (0:00:00.493) 0:02:22.012 ******** 2026-03-26 02:20:01.620388 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:20:01.620399 
| orchestrator | skipping: [testbed-node-4] 2026-03-26 02:20:01.620410 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:20:01.620421 | orchestrator | 2026-03-26 02:20:01.620432 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-26 02:20:01.620442 | orchestrator | Thursday 26 March 2026 02:20:01 +0000 (0:00:00.563) 0:02:22.575 ******** 2026-03-26 02:20:01.620453 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:20:01.620464 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:20:01.620475 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:20:01.620486 | orchestrator | 2026-03-26 02:20:01.620497 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-26 02:20:01.620508 | orchestrator | Thursday 26 March 2026 02:20:01 +0000 (0:00:00.353) 0:02:22.929 ******** 2026-03-26 02:20:01.620529 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:21:41.876307 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:21:41.876399 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:21:41.876411 | orchestrator | 2026-03-26 02:21:41.876421 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-26 02:21:41.876430 | orchestrator | Thursday 26 March 2026 02:20:01 +0000 (0:00:00.325) 0:02:23.254 ******** 2026-03-26 02:21:41.876438 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:21:41.876445 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:21:41.876452 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:21:41.876458 | orchestrator | 2026-03-26 02:21:41.876466 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-26 02:21:41.876473 | orchestrator | Thursday 26 March 2026 02:20:02 +0000 (0:00:00.631) 0:02:23.886 ******** 2026-03-26 02:21:41.876480 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:21:41.876487 | 
orchestrator | changed: [testbed-node-4] 2026-03-26 02:21:41.876494 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:21:41.876501 | orchestrator | 2026-03-26 02:21:41.876508 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-26 02:21:41.876515 | orchestrator | Thursday 26 March 2026 02:20:03 +0000 (0:00:01.410) 0:02:25.296 ******** 2026-03-26 02:21:41.876521 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:21:41.876528 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:21:41.876535 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:21:41.876542 | orchestrator | 2026-03-26 02:21:41.876549 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-26 02:21:41.876556 | orchestrator | Thursday 26 March 2026 02:20:05 +0000 (0:00:01.225) 0:02:26.522 ******** 2026-03-26 02:21:41.876563 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:21:41.876570 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:21:41.876576 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:21:41.876583 | orchestrator | 2026-03-26 02:21:41.876604 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-26 02:21:41.876640 | orchestrator | 2026-03-26 02:21:41.876648 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-26 02:21:41.876655 | orchestrator | Thursday 26 March 2026 02:20:15 +0000 (0:00:10.243) 0:02:36.766 ******** 2026-03-26 02:21:41.876662 | orchestrator | ok: [testbed-manager] 2026-03-26 02:21:41.876670 | orchestrator | 2026-03-26 02:21:41.876676 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-26 02:21:41.876683 | orchestrator | Thursday 26 March 2026 02:20:16 +0000 (0:00:01.053) 0:02:37.820 ******** 2026-03-26 02:21:41.876690 | orchestrator | changed: [testbed-manager] 2026-03-26 
02:21:41.876697 | orchestrator | 2026-03-26 02:21:41.876704 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-26 02:21:41.876710 | orchestrator | Thursday 26 March 2026 02:20:16 +0000 (0:00:00.471) 0:02:38.291 ******** 2026-03-26 02:21:41.876717 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-26 02:21:41.876724 | orchestrator | 2026-03-26 02:21:41.876730 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-26 02:21:41.876737 | orchestrator | Thursday 26 March 2026 02:20:17 +0000 (0:00:00.556) 0:02:38.848 ******** 2026-03-26 02:21:41.876744 | orchestrator | changed: [testbed-manager] 2026-03-26 02:21:41.876750 | orchestrator | 2026-03-26 02:21:41.876757 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-26 02:21:41.876764 | orchestrator | Thursday 26 March 2026 02:20:18 +0000 (0:00:00.897) 0:02:39.745 ******** 2026-03-26 02:21:41.876771 | orchestrator | changed: [testbed-manager] 2026-03-26 02:21:41.876777 | orchestrator | 2026-03-26 02:21:41.876784 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-26 02:21:41.876791 | orchestrator | Thursday 26 March 2026 02:20:18 +0000 (0:00:00.645) 0:02:40.390 ******** 2026-03-26 02:21:41.876798 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-26 02:21:41.876805 | orchestrator | 2026-03-26 02:21:41.876816 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-26 02:21:41.876827 | orchestrator | Thursday 26 March 2026 02:20:20 +0000 (0:00:01.629) 0:02:42.020 ******** 2026-03-26 02:21:41.876838 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-26 02:21:41.876850 | orchestrator | 2026-03-26 02:21:41.876952 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
Thursday 26 March 2026 02:20:21 +0000 (0:00:00.880)       0:02:42.901 ********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Thursday 26 March 2026 02:20:21 +0000 (0:00:00.494)       0:02:43.396 ********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Thursday 26 March 2026 02:20:22 +0000 (0:00:00.485)       0:02:43.881 ********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Thursday 26 March 2026 02:20:22 +0000 (0:00:00.393)       0:02:44.275 ********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Thursday 26 March 2026 02:20:23 +0000 (0:00:00.262)       0:02:44.537 ********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Thursday 26 March 2026 02:20:23 +0000 (0:00:00.867)       0:02:45.405 ********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Thursday 26 March 2026 02:20:25 +0000 (0:00:01.679)       0:02:47.085 ********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Thursday 26 March 2026 02:20:26 +0000 (0:00:00.838)       0:02:47.923 ********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Thursday 26 March 2026 02:20:26 +0000 (0:00:00.493)       0:02:48.416 ********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Thursday 26 March 2026 02:20:34 +0000 (0:00:07.985)       0:02:56.402 ********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Thursday 26 March 2026 02:20:47 +0000 (0:00:12.989)       0:03:09.392 ********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Thursday 26 March 2026 02:20:48 +0000 (0:00:00.817)       0:03:10.209 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Thursday 26 March 2026 02:20:49 +0000 (0:00:00.328)       0:03:10.538 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Thursday 26 March 2026 02:20:49 +0000 (0:00:00.326)       0:03:10.865 ********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Thursday 26 March 2026 02:20:50 +0000 (0:00:00.775)       0:03:11.640 ********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Thursday 26 March 2026 02:20:51 +0000 (0:00:00.915)       0:03:12.556 ********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Thursday 26 March 2026 02:20:52 +0000 (0:00:01.096)       0:03:13.653 ********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Thursday 26 March 2026 02:20:52 +0000 (0:00:00.134)       0:03:13.787 ********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Thursday 26 March 2026 02:20:53 +0000 (0:00:01.046)       0:03:14.834 ********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Thursday 26 March 2026 02:20:53 +0000 (0:00:00.142)       0:03:14.976 ********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Thursday 26 March 2026 02:20:53 +0000 (0:00:00.115)       0:03:15.092 ********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Thursday 26 March 2026 02:20:53 +0000 (0:00:00.131)       0:03:15.224 ********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Thursday 26 March 2026 02:20:53 +0000 (0:00:00.122)       0:03:15.346 ********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Thursday 26 March 2026 02:20:59 +0000 (0:00:05.567)       0:03:20.914 ********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
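The "Wait for Cilium resources" task above is an Ansible `until`/`retries`/`delay` loop: it polls rollout status and retries on failure ("30 retries left") until each deployment/daemonset is ready. The general pattern can be sketched as follows (hypothetical helper names, not the role's actual code):

```python
import time

def wait_until_ready(probe, retries=30, delay=2.0, sleep=time.sleep):
    """Poll `probe` until it returns True, mirroring Ansible's
    until/retries/delay loop. Raises TimeoutError when retries are
    exhausted (Ansible would mark the task failed instead)."""
    for attempt in range(retries):
        if probe():
            return attempt
        sleep(delay)
    raise TimeoutError(f"resource not ready after {retries} attempts")

# Simulated probe: ready on the third poll, like a deployment whose
# pods are still being scheduled during the first two checks.
states = iter([False, False, True])
print(wait_until_ready(lambda: next(states), delay=0, sleep=lambda _: None))  # → 2
```

In the real task the probe would be something like `kubectl rollout status deployment/cilium-operator -n kube-system --timeout=...` returning exit code 0.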
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Thursday 26 March 2026 02:21:41 +0000 (0:00:42.445)       0:04:03.359 ********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Thursday 26 March 2026 02:21:43 +0000 (0:00:01.296)       0:04:04.656 ********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Thursday 26 March 2026 02:21:44 +0000 (0:00:01.828)       0:04:06.484 ********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Thursday 26 March 2026 02:21:46 +0000 (0:00:01.154)       0:04:07.639 ********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Thursday 26 March 2026 02:21:46 +0000 (0:00:00.135)       0:04:07.774 ********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Thursday 26 March 2026 02:21:48 +0000 (0:00:01.862)       0:04:09.636 ********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Thursday 26 March 2026 02:21:48 +0000 (0:00:00.359)       0:04:09.996 ********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Thursday 26 March 2026 02:21:49 +0000 (0:00:00.850)       0:04:10.847 ********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Thursday 26 March 2026 02:21:49 +0000 (0:00:00.388)       0:04:11.235 ********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Thursday 26 March 2026 02:21:49 +0000 (0:00:00.237)       0:04:11.472 ********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Thursday 26 March 2026 02:21:55 +0000 (0:00:05.469)       0:04:16.942 ********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Thursday 26 March 2026 02:21:56 +0000 (0:00:00.807)       0:04:17.750 ********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Thursday 26 March 2026 02:22:04 +0000 (0:00:07.998)       0:04:25.748 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Thursday 26 March 2026 02:22:04 +0000 (0:00:00.578)       0:04:26.327 ********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager            : ok=21   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=50   changed=23   unreachable=0    failed=0    skipped=28   rescued=0    ignored=0
testbed-node-1             : ok=38   changed=16   unreachable=0    failed=0    skipped=25   rescued=0    ignored=0
testbed-node-2             : ok=38   changed=16   unreachable=0    failed=0    skipped=25   rescued=0    ignored=0
testbed-node-3             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
testbed-node-4             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0
testbed-node-5             : ok=16   changed=8    unreachable=0    failed=0    skipped=17   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 26 March 2026 02:22:05 +0000 (0:00:00.706)       0:04:27.033 ********
===============================================================================
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.12s
k3s_server_post : Wait for Cilium resources ---------------------------- 42.45s
k3s_server : Enable and check K3s service ------------------------------ 27.41s
kubectl : Install required packages ------------------------------------ 12.99s
k3s_agent : Manage k3s service ----------------------------------------- 10.24s
Manage labels ----------------------------------------------------------- 8.00s
kubectl : Add repository Debian ----------------------------------------- 7.99s
k3s_server_post : Install Cilium ---------------------------------------- 5.57s
k9s : Install k9s packages ---------------------------------------------- 5.47s
k3s_download : Download k3s binary x64 ---------------------------------- 4.83s
k3s_server : Detect Kubernetes version for label compatibility ---------- 3.25s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.02s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.90s
k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.15s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.01s
k3s_server_post : Test for BGP config resources ------------------------- 1.86s
k3s_server_post : Copy BGP manifests to first master -------------------- 1.83s
k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.77s
kubectl : Install apt-transport-https package --------------------------- 1.68s
k3s_server : Create custom resolv.conf for k3s -------------------------- 1.63s
+ osism apply copy-kubeconfig
2026-03-26 02:22:18 | INFO  | Task 33d65d64-967a-44bc-a1e2-7e6dd84a9f9b (copy-kubeconfig) was prepared for execution.
2026-03-26 02:22:18 | INFO  | It takes a moment until task 33d65d64-967a-44bc-a1e2-7e6dd84a9f9b (copy-kubeconfig) has been started and output is visible here.
PLAY [Copy kubeconfig to the configuration repository] *************************

TASK [Get kubeconfig file] *****************************************************
Thursday 26 March 2026 02:22:22 +0000 (0:00:00.158)       0:00:00.158 ********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Thursday 26 March 2026 02:22:23 +0000 (0:00:00.792)       0:00:00.950 ********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig file] ****************************
Thursday 26 March 2026 02:22:24 +0000 (0:00:01.278)       0:00:02.229 ********
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 26 March 2026 02:22:25 +0000 (0:00:00.497)       0:00:02.727 ********
===============================================================================
Write kubeconfig file --------------------------------------------------- 1.28s
Get kubeconfig file ----------------------------------------------------- 0.79s
Change server address in the kubeconfig file ---------------------------- 0.50s
+ sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-03-26 02:22:38 | INFO  | Task 8310beb1-1a99-4254-99a6-2a442c4f14cd (openstackclient) was prepared for execution.
2026-03-26 02:22:38 | INFO  | It takes a moment until task 8310beb1-1a99-4254-99a6-2a442c4f14cd (openstackclient) has been started and output is visible here.

PLAY [Apply role openstackclient] **********************************************

TASK [osism.services.openstackclient : Include tasks] **************************
Thursday 26 March 2026 02:22:42 +0000 (0:00:00.237)       0:00:00.237 ********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager

TASK [osism.services.openstackclient : Create required directories] ************
Thursday 26 March 2026 02:22:43 +0000 (0:00:00.248)       0:00:00.485 ********
changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
changed: [testbed-manager] => (item=/opt/openstackclient/data)
ok: [testbed-manager] => (item=/opt/openstackclient)

TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
Thursday 26 March 2026 02:22:44 +0000 (0:00:01.298)       0:00:01.784 ********
changed: [testbed-manager]

TASK [osism.services.openstackclient : Manage openstackclient service] *********
Thursday 26 March 2026 02:22:45 +0000 (0:00:01.495)       0:00:03.279 ********
FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
ok: [testbed-manager]

TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
Thursday 26 March 2026 02:23:21 +0000 (0:00:35.575)       0:00:38.855 ********
changed: [testbed-manager]

TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
Thursday 26 March 2026 02:23:22 +0000 (0:00:00.990)       0:00:39.845 ********
ok: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
Thursday 26 March 2026 02:23:23 +0000 (0:00:00.734)       0:00:40.579 ********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
Thursday 26 March 2026 02:23:24 +0000 (0:00:01.728)       0:00:42.308 ********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
Thursday 26 March 2026 02:23:25 +0000 (0:00:00.678)       0:00:43.081 ********
changed: [testbed-manager]

RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
Thursday 26 March 2026 02:23:26 +0000 (0:00:00.772)       0:00:43.760 ********
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=10   changed=6    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 26 March 2026 02:23:26 +0000 (0:00:00.423)       0:00:44.183 ********
===============================================================================
osism.services.openstackclient : Manage openstackclient service -------- 35.58s
osism.services.openstackclient : Restart openstackclient service -------- 1.73s
osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.50s
osism.services.openstackclient : Create required directories ------------ 1.30s
osism.services.openstackclient : Copy openstack wrapper script ---------- 0.99s
osism.services.openstackclient : Ensure that all containers are up ------ 0.77s
osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.73s
osism.services.openstackclient : Wait for an healthy service ------------ 0.68s
osism.services.openstackclient : Copy bash completion script ------------ 0.42s
osism.services.openstackclient : Include tasks -------------------------- 0.25s
2026-03-26 02:23:29 | INFO  | Task a0a4ffc3-3e59-4deb-90a6-de603effbb9d (common) was prepared for execution.
2026-03-26 02:23:29 | INFO  | It takes a moment until task a0a4ffc3-3e59-4deb-90a6-de603effbb9d (common) has been started and output is visible here.
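The "Wait for an healthy service" handler above waits on Docker's container health status. That status is exposed in `docker inspect` output under `State.Health.Status`; a sketch of the check (hypothetical helper, assuming the compose service defines a healthcheck):

```python
import json

def is_healthy(inspect_json: str) -> bool:
    """Return True when `docker inspect <container>` output reports
    the healthcheck status as 'healthy' (vs. 'starting'/'unhealthy')."""
    data = json.loads(inspect_json)
    return data[0]["State"]["Health"]["Status"] == "healthy"

# Simulated `docker inspect` output, reduced to the relevant field.
sample = json.dumps([{"State": {"Health": {"Status": "healthy"}}}])
print(is_healthy(sample))  # → True
```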
2026-03-26 02:23:42.558969 | orchestrator | 2026-03-26 02:23:42.559065 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-26 02:23:42.559074 | orchestrator | 2026-03-26 02:23:42.559079 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-26 02:23:42.559085 | orchestrator | Thursday 26 March 2026 02:23:34 +0000 (0:00:00.291) 0:00:00.291 ******** 2026-03-26 02:23:42.559090 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:23:42.559096 | orchestrator | 2026-03-26 02:23:42.559101 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-26 02:23:42.559106 | orchestrator | Thursday 26 March 2026 02:23:35 +0000 (0:00:01.481) 0:00:01.773 ******** 2026-03-26 02:23:42.559110 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-26 02:23:42.559115 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-26 02:23:42.559120 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-26 02:23:42.559125 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-26 02:23:42.559129 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-26 02:23:42.559134 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-26 02:23:42.559139 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-26 02:23:42.559143 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-26 02:23:42.559161 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-03-26 02:23:42.559167 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-26 02:23:42.559172 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-26 02:23:42.559176 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-26 02:23:42.559181 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-26 02:23:42.559186 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-26 02:23:42.559190 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-26 02:23:42.559195 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-26 02:23:42.559200 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-26 02:23:42.559221 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-26 02:23:42.559226 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-26 02:23:42.559230 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-26 02:23:42.559235 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-26 02:23:42.559240 | orchestrator | 2026-03-26 02:23:42.559244 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-26 02:23:42.559249 | orchestrator | Thursday 26 March 2026 02:23:38 +0000 (0:00:02.602) 0:00:04.375 ******** 2026-03-26 02:23:42.559254 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:23:42.559260 | orchestrator | 2026-03-26 02:23:42.559265 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-26 02:23:42.559272 | orchestrator | Thursday 26 March 2026 02:23:39 +0000 (0:00:01.386) 0:00:05.761 ******** 2026-03-26 02:23:42.559279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:42.559286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:42.559305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:42.559311 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:42.559316 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:42.559321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:42.559330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:42.559335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:42.559340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:42.559349 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529807 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 
02:23:43.529892 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:43.529969 | orchestrator | 2026-03-26 02:23:43.529986 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-26 02:23:43.530002 | orchestrator | Thursday 26 March 2026 02:23:43 +0000 (0:00:03.494) 0:00:09.256 ******** 2026-03-26 02:23:43.530132 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:43.530153 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:43.530169 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:43.530186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:43.530220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.128752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.128869 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:23:44.128930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.128946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.128957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.128967 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:23:44.128978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.129000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.129011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.129022 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:23:44.129104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.129125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.129135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.129145 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:23:44.129155 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:23:44.129165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.129175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.129185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.129195 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:23:44.129206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.129224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936298 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:23:44.936312 | orchestrator | 2026-03-26 02:23:44.936322 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-26 02:23:44.936333 | orchestrator | Thursday 26 March 2026 02:23:44 +0000 (0:00:00.918) 0:00:10.174 ******** 2026-03-26 02:23:44.936344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.936355 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936365 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936375 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:23:44.936401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.936417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.936448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936502 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:23:44.936511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.936520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:44.936544 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:44.936565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:50.527843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:50.527958 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:23:50.527978 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:23:50.527991 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:23:50.528005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:50.528020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:50.528059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2026-03-26 02:23:50.528072 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:23:50.528085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 02:23:50.528124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:50.528137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:23:50.528150 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:23:50.528162 | orchestrator | 2026-03-26 02:23:50.528175 | orchestrator | TASK [common : Copying over /run subdirectories conf] 
************************** 2026-03-26 02:23:50.528189 | orchestrator | Thursday 26 March 2026 02:23:46 +0000 (0:00:02.013) 0:00:12.188 ******** 2026-03-26 02:23:50.528201 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:23:50.528212 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:23:50.528253 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:23:50.528266 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:23:50.528277 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:23:50.528289 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:23:50.528300 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:23:50.528313 | orchestrator | 2026-03-26 02:23:50.528325 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-26 02:23:50.528338 | orchestrator | Thursday 26 March 2026 02:23:46 +0000 (0:00:00.719) 0:00:12.907 ******** 2026-03-26 02:23:50.528351 | orchestrator | skipping: [testbed-manager] 2026-03-26 02:23:50.528364 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:23:50.528377 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:23:50.528390 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:23:50.528403 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:23:50.528417 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:23:50.528431 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:23:50.528444 | orchestrator | 2026-03-26 02:23:50.528458 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-26 02:23:50.528471 | orchestrator | Thursday 26 March 2026 02:23:47 +0000 (0:00:00.943) 0:00:13.850 ******** 2026-03-26 02:23:50.528487 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:50.528517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:50.528539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:50.528553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 
02:23:50.528571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:50.528585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:50.528604 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:23:53.376560 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376913 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.376991 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.377011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.377057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.377079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:23:53.377100 | orchestrator | 2026-03-26 02:23:53.377121 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-26 02:23:53.377143 | orchestrator | Thursday 26 March 2026 02:23:51 +0000 (0:00:03.512) 0:00:17.363 ******** 2026-03-26 02:23:53.377162 | orchestrator | [WARNING]: Skipped 
2026-03-26 02:23:53.377184 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-26 02:23:53.377204 | orchestrator | to this access issue: 2026-03-26 02:23:53.377223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-26 02:23:53.377241 | orchestrator | directory 2026-03-26 02:23:53.377262 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 02:23:53.377283 | orchestrator | 2026-03-26 02:23:53.377302 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-26 02:23:53.377319 | orchestrator | Thursday 26 March 2026 02:23:52 +0000 (0:00:00.993) 0:00:18.357 ******** 2026-03-26 02:23:53.377338 | orchestrator | [WARNING]: Skipped 2026-03-26 02:23:53.377368 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-26 02:24:03.657418 | orchestrator | to this access issue: 2026-03-26 02:24:03.657507 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-26 02:24:03.657515 | orchestrator | directory 2026-03-26 02:24:03.657520 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 02:24:03.657526 | orchestrator | 2026-03-26 02:24:03.657540 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-26 02:24:03.657549 | orchestrator | Thursday 26 March 2026 02:23:53 +0000 (0:00:01.351) 0:00:19.709 ******** 2026-03-26 02:24:03.657574 | orchestrator | [WARNING]: Skipped 2026-03-26 02:24:03.657581 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-26 02:24:03.657587 | orchestrator | to this access issue: 2026-03-26 02:24:03.657593 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-26 02:24:03.657599 | orchestrator | directory 2026-03-26 02:24:03.657604 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-03-26 02:24:03.657611 | orchestrator | 2026-03-26 02:24:03.657618 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-26 02:24:03.657625 | orchestrator | Thursday 26 March 2026 02:23:54 +0000 (0:00:00.865) 0:00:20.574 ******** 2026-03-26 02:24:03.657631 | orchestrator | [WARNING]: Skipped 2026-03-26 02:24:03.657638 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-26 02:24:03.657642 | orchestrator | to this access issue: 2026-03-26 02:24:03.657646 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-26 02:24:03.657650 | orchestrator | directory 2026-03-26 02:24:03.657654 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 02:24:03.657658 | orchestrator | 2026-03-26 02:24:03.657662 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-26 02:24:03.657666 | orchestrator | Thursday 26 March 2026 02:23:55 +0000 (0:00:00.949) 0:00:21.524 ******** 2026-03-26 02:24:03.657670 | orchestrator | changed: [testbed-manager] 2026-03-26 02:24:03.657674 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:24:03.657678 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:24:03.657681 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:24:03.657685 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:24:03.657689 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:24:03.657705 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:24:03.657709 | orchestrator | 2026-03-26 02:24:03.657713 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-26 02:24:03.657717 | orchestrator | Thursday 26 March 2026 02:23:58 +0000 (0:00:02.586) 0:00:24.110 ******** 2026-03-26 02:24:03.657721 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 02:24:03.657726 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 02:24:03.657730 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 02:24:03.657734 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 02:24:03.657738 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 02:24:03.657742 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 02:24:03.657746 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 02:24:03.657749 | orchestrator | 2026-03-26 02:24:03.657758 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-26 02:24:03.657762 | orchestrator | Thursday 26 March 2026 02:24:00 +0000 (0:00:02.216) 0:00:26.326 ******** 2026-03-26 02:24:03.657765 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:24:03.657769 | orchestrator | changed: [testbed-manager] 2026-03-26 02:24:03.657773 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:24:03.657777 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:24:03.657781 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:24:03.657784 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:24:03.657788 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:24:03.657792 | orchestrator | 2026-03-26 02:24:03.657796 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-26 02:24:03.657804 | orchestrator | Thursday 26 March 2026 02:24:02 +0000 (0:00:02.051) 0:00:28.377 ******** 2026-03-26 
02:24:03.657810 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:03.657830 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:24:03.657838 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:03.657845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:24:03.657851 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:03.657857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:24:03.657867 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:03.657886 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:03.657901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:24:03.657914 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:09.517317 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:09.517432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:24:09.517449 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:09.517482 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:09.517495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:24:09.517529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:09.517542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 
02:24:09.517571 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:09.517585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:09.517596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:09.517608 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:09.517621 | orchestrator | 2026-03-26 02:24:09.517634 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] 
************************
2026-03-26 02:24:09.517647 | orchestrator | Thursday 26 March 2026 02:24:03 +0000 (0:00:01.523) 0:00:29.901 ********
2026-03-26 02:24:09.517658 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-26 02:24:09.517670 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-26 02:24:09.517689 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-26 02:24:09.517700 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-26 02:24:09.517711 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-26 02:24:09.517722 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-26 02:24:09.517733 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-03-26 02:24:09.517744 | orchestrator |
2026-03-26 02:24:09.517755 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-03-26 02:24:09.517766 | orchestrator | Thursday 26 March 2026 02:24:05 +0000 (0:00:01.957) 0:00:31.859 ********
2026-03-26 02:24:09.517777 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-26 02:24:09.517789 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-26 02:24:09.517800 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-26 02:24:09.517818 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-26 02:24:09.517831 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-03-26 02:24:09.517843 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 02:24:09.517856 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 02:24:09.517868 | orchestrator | 2026-03-26 02:24:09.517881 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-26 02:24:09.517894 | orchestrator | Thursday 26 March 2026 02:24:07 +0000 (0:00:01.692) 0:00:33.552 ******** 2026-03-26 02:24:09.517907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:09.517929 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:10.119399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:10.119491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:10.119528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:10.119552 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:10.119563 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 02:24:10.119582 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:24:10.119696 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:25:35.993012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:25:35.993254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:25:35.993286 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-26 02:25:35.993331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:25:35.993358 | orchestrator |
2026-03-26 02:25:35.993379 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-26 02:25:35.993469 | orchestrator | Thursday 26 March 2026 02:24:10 +0000 (0:00:02.613) 0:00:36.166 ********
2026-03-26 02:25:35.993490 | orchestrator | changed: [testbed-manager]
2026-03-26 02:25:35.993510 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:25:35.993530 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:25:35.993547 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:25:35.993562 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:25:35.993573 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:25:35.993584 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:25:35.993595 | orchestrator |
2026-03-26 02:25:35.993606 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-26 02:25:35.993617 | orchestrator | Thursday 26 March 2026 02:24:11 +0000 (0:00:01.411) 0:00:37.577 ********
2026-03-26 02:25:35.993628 | orchestrator | changed: [testbed-manager]
2026-03-26 02:25:35.993639 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:25:35.993649 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:25:35.993660 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:25:35.993671 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:25:35.993681 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:25:35.993692 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:25:35.993703 | orchestrator |
2026-03-26 02:25:35.993714 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 02:25:35.993725 | orchestrator | Thursday 26 March 2026 02:24:12 +0000 (0:00:00.065) 0:00:38.648 ********
2026-03-26 02:25:35.993736 | orchestrator |
2026-03-26 02:25:35.993747 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 02:25:35.993758 | orchestrator | Thursday 26 March 2026 02:24:12 +0000 (0:00:00.066) 0:00:38.713 ********
2026-03-26 02:25:35.993768 | orchestrator |
2026-03-26 02:25:35.993779 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 02:25:35.993790 | orchestrator | Thursday 26 March 2026 02:24:12 +0000 (0:00:00.066) 0:00:38.780 ********
2026-03-26 02:25:35.993849 | orchestrator |
2026-03-26 02:25:35.993863 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 02:25:35.993874 | orchestrator | Thursday 26 March 2026 02:24:12 +0000 (0:00:00.088) 0:00:38.869 ********
2026-03-26 02:25:35.993885 | orchestrator |
2026-03-26 02:25:35.993896 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 02:25:35.993969 | orchestrator | Thursday 26 March 2026 02:24:13 +0000 (0:00:00.252) 0:00:39.121 ********
2026-03-26 02:25:35.993998 | orchestrator |
2026-03-26 02:25:35.994146 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 02:25:35.994174 | orchestrator | Thursday 26 March 2026 02:24:13 +0000 (0:00:00.067) 0:00:39.188 ********
2026-03-26 02:25:35.994191 | orchestrator |
2026-03-26 02:25:35.994208 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 02:25:35.994226 | orchestrator | Thursday 26 March 2026 02:24:13 +0000 (0:00:00.059) 0:00:39.248 ********
2026-03-26 02:25:35.994244 | orchestrator |
2026-03-26 02:25:35.994262 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-26 02:25:35.994279 | orchestrator | Thursday 26 March 2026 02:24:13 +0000 (0:00:00.090) 0:00:39.338 ********
2026-03-26 02:25:35.994297 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:25:35.994316 | orchestrator | changed: [testbed-manager]
2026-03-26 02:25:35.994333 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:25:35.994350 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:25:35.994368 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:25:35.994418 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:25:35.994438 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:25:35.994456 | orchestrator |
2026-03-26 02:25:35.994476 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-26 02:25:35.994495 | orchestrator | Thursday 26 March 2026 02:24:53 +0000 (0:00:40.562) 0:01:19.900 ********
2026-03-26 02:25:35.994513 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:25:35.994531 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:25:35.994542 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:25:35.994553 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:25:35.994564 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:25:35.994574 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:25:35.994585 | orchestrator | changed: [testbed-manager]
2026-03-26 02:25:35.994596 | orchestrator |
2026-03-26 02:25:35.994609 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-26 02:25:35.994629 | orchestrator | Thursday 26 March 2026 02:25:25 +0000 (0:00:31.651) 0:01:51.552 ********
2026-03-26 02:25:35.994647 | orchestrator | ok: [testbed-manager]
2026-03-26 02:25:35.994665 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:25:35.994683 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:25:35.994698 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:25:35.994715 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:25:35.994733 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:25:35.994752 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:25:35.994769 | orchestrator |
2026-03-26 02:25:35.994789 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-26 02:25:35.994809 | orchestrator | Thursday 26 March 2026 02:25:27 +0000 (0:00:01.878) 0:01:53.430 ********
2026-03-26 02:25:35.994827 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:25:35.994846 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:25:35.994858 | orchestrator | changed: [testbed-manager]
2026-03-26 02:25:35.994869 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:25:35.994880 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:25:35.994890 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:25:35.994901 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:25:35.994912 | orchestrator |
2026-03-26 02:25:35.994923 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:25:35.994935 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 02:25:35.994948 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 02:25:35.994974 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 02:25:35.995003 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 02:25:35.995014 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 02:25:35.995025 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 02:25:35.995036 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 02:25:35.995047 | orchestrator |
2026-03-26 02:25:35.995059 | orchestrator |
2026-03-26 02:25:35.995070 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:25:35.995123 | orchestrator | Thursday 26 March 2026 02:25:35 +0000 (0:00:08.580) 0:02:02.011 ********
2026-03-26 02:25:35.995137 | orchestrator | ===============================================================================
2026-03-26 02:25:35.995148 | orchestrator | common : Restart fluentd container ------------------------------------- 40.56s
2026-03-26 02:25:35.995159 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 31.65s
2026-03-26 02:25:35.995169 | orchestrator | common : Restart cron container ----------------------------------------- 8.58s
2026-03-26 02:25:35.995181 | orchestrator | common : Copying over config.json files for services -------------------- 3.51s
2026-03-26 02:25:35.995191 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.49s
2026-03-26 02:25:35.995203 | orchestrator | common : Check common containers ---------------------------------------- 2.61s
2026-03-26 02:25:35.995213 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.60s
2026-03-26 02:25:35.995224 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.59s
2026-03-26 02:25:35.995235 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.22s
2026-03-26 02:25:35.995247 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.05s
2026-03-26 02:25:35.995257 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.01s
2026-03-26 02:25:35.995268 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.96s
2026-03-26 02:25:35.995279 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.88s
2026-03-26 02:25:35.995290 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.69s
2026-03-26 02:25:35.995301 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.52s
2026-03-26 02:25:35.995312 | orchestrator | common : include_tasks -------------------------------------------------- 1.48s
2026-03-26 02:25:35.995336 | orchestrator | common : Creating log volume -------------------------------------------- 1.41s
2026-03-26 02:25:36.446429 | orchestrator | common : include_tasks -------------------------------------------------- 1.39s
2026-03-26 02:25:36.446556 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.35s
2026-03-26 02:25:36.446574 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.07s
2026-03-26 02:25:38.923900 | orchestrator | 2026-03-26 02:25:38 | INFO  | Task 69ee4675-db84-4d53-8664-3bcc753c27a0 (loadbalancer) was prepared for execution.
2026-03-26 02:25:38.924029 | orchestrator | 2026-03-26 02:25:38 | INFO  | It takes a moment until task 69ee4675-db84-4d53-8664-3bcc753c27a0 (loadbalancer) has been started and output is visible here.
2026-03-26 02:25:53.220579 | orchestrator |
2026-03-26 02:25:53.220729 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 02:25:53.220749 | orchestrator |
2026-03-26 02:25:53.220762 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 02:25:53.220774 | orchestrator | Thursday 26 March 2026 02:25:43 +0000 (0:00:00.271) 0:00:00.271 ********
2026-03-26 02:25:53.220813 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:25:53.220827 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:25:53.220838 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:25:53.220849 | orchestrator |
2026-03-26 02:25:53.220860 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 02:25:53.220871 | orchestrator | Thursday 26 March 2026 02:25:43 +0000 (0:00:00.310) 0:00:00.582 ********
2026-03-26 02:25:53.220883 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-26 02:25:53.220894 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-26 02:25:53.220905 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-26 02:25:53.220916 | orchestrator |
2026-03-26 02:25:53.220927 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-26 02:25:53.220937 | orchestrator |
2026-03-26 02:25:53.220948 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-26 02:25:53.220959 | orchestrator | Thursday 26 March 2026 02:25:44 +0000 (0:00:00.455) 0:00:01.037 ********
2026-03-26 02:25:53.220986 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:25:53.220999 | orchestrator |
2026-03-26 02:25:53.221010 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-26 02:25:53.221021 | orchestrator | Thursday 26 March 2026 02:25:44 +0000 (0:00:00.598) 0:00:01.635 ********
2026-03-26 02:25:53.221031 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:25:53.221042 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:25:53.221053 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:25:53.221064 | orchestrator |
2026-03-26 02:25:53.221075 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-26 02:25:53.221116 | orchestrator | Thursday 26 March 2026 02:25:45 +0000 (0:00:00.619) 0:00:02.255 ********
2026-03-26 02:25:53.221134 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:25:53.221146 | orchestrator |
2026-03-26 02:25:53.221159 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-26 02:25:53.221172 | orchestrator | Thursday 26 March 2026 02:25:45 +0000 (0:00:00.735) 0:00:02.991 ********
2026-03-26 02:25:53.221184 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:25:53.221197 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:25:53.221209 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:25:53.221221 | orchestrator |
2026-03-26 02:25:53.221234 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-26 02:25:53.221246 | orchestrator | Thursday 26 March 2026 02:25:46 +0000 (0:00:00.601) 0:00:03.593 ********
2026-03-26 02:25:53.221258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-26 02:25:53.221271 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-26 02:25:53.221284 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-26 02:25:53.221296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-26 02:25:53.221308 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-26 02:25:53.221320 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-26 02:25:53.221332 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-26 02:25:53.221352 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-26 02:25:53.221370 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-26 02:25:53.221388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-26 02:25:53.221420 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-26 02:25:53.221439 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-26 02:25:53.221457 | orchestrator |
2026-03-26 02:25:53.221468 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-26 02:25:53.221479 | orchestrator | Thursday 26 March 2026 02:25:48 +0000 (0:00:02.236) 0:00:05.829 ********
2026-03-26 02:25:53.221490 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-03-26 02:25:53.221501 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-03-26 02:25:53.221512 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-03-26 02:25:53.221523 | orchestrator |
2026-03-26 02:25:53.221534 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-26 02:25:53.221545 | orchestrator | Thursday 26 March 2026 02:25:49 +0000 (0:00:00.742) 0:00:06.572 ********
2026-03-26 02:25:53.221556 | orchestrator | changed: [testbed-node-1] =>
(item=ip_vs) 2026-03-26 02:25:53.221567 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-26 02:25:53.221578 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-26 02:25:53.221588 | orchestrator | 2026-03-26 02:25:53.221599 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-26 02:25:53.221610 | orchestrator | Thursday 26 March 2026 02:25:50 +0000 (0:00:01.251) 0:00:07.823 ******** 2026-03-26 02:25:53.221621 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-26 02:25:53.221632 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:25:53.221663 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-26 02:25:53.221675 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:25:53.221686 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-26 02:25:53.221697 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:25:53.221707 | orchestrator | 2026-03-26 02:25:53.221718 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-26 02:25:53.221729 | orchestrator | Thursday 26 March 2026 02:25:51 +0000 (0:00:00.553) 0:00:08.376 ******** 2026-03-26 02:25:53.221743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 02:25:53.221768 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 02:25:53.221780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 02:25:53.221798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 
02:25:53.221811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:25:53.221830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:25:58.578882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:25:58.579004 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:25:58.579023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:25:58.579036 | orchestrator | 2026-03-26 02:25:58.579049 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-26 02:25:58.579062 | orchestrator | Thursday 26 March 2026 02:25:53 +0000 (0:00:01.836) 0:00:10.213 ******** 2026-03-26 02:25:58.579074 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:25:58.579147 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:25:58.579165 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:25:58.579176 | orchestrator | 2026-03-26 02:25:58.579188 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-26 02:25:58.579199 | orchestrator | Thursday 26 March 2026 02:25:54 +0000 (0:00:00.910) 0:00:11.124 ******** 2026-03-26 02:25:58.579211 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-26 02:25:58.579223 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-26 
02:25:58.579233 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-26 02:25:58.579244 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-26 02:25:58.579255 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-26 02:25:58.579266 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-26 02:25:58.579277 | orchestrator | 2026-03-26 02:25:58.579288 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-26 02:25:58.579299 | orchestrator | Thursday 26 March 2026 02:25:55 +0000 (0:00:01.544) 0:00:12.669 ******** 2026-03-26 02:25:58.579310 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:25:58.579321 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:25:58.579331 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:25:58.579342 | orchestrator | 2026-03-26 02:25:58.579353 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-26 02:25:58.579364 | orchestrator | Thursday 26 March 2026 02:25:56 +0000 (0:00:00.931) 0:00:13.600 ******** 2026-03-26 02:25:58.579375 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:25:58.579387 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:25:58.579399 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:25:58.579411 | orchestrator | 2026-03-26 02:25:58.579425 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-26 02:25:58.579437 | orchestrator | Thursday 26 March 2026 02:25:57 +0000 (0:00:01.366) 0:00:14.967 ******** 2026-03-26 02:25:58.579451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:25:58.579484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:25:58.579498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:25:58.579513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 02:25:58.579535 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:25:58.579549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:25:58.579600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:25:58.579614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:25:58.579625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 02:25:58.579637 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:25:58.579657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:01.411824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:01.411981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:01.412001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 02:26:01.412025 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:01.412042 | orchestrator | 2026-03-26 02:26:01.412063 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-03-26 02:26:01.412121 | orchestrator | Thursday 26 March 2026 02:25:58 +0000 (0:00:00.605) 0:00:15.572 ******** 2026-03-26 02:26:01.412144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:01.412163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:01.412183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:01.412266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:01.412292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:01.412309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a', 
'__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 02:26:01.412323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:01.412338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:01.412351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a', 
'__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 02:26:01.412392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:10.225901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:10.225999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a', 
'__omit_place_holder__5801856146be98cdce07043bf9184b3fff60071a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 02:26:10.226013 | orchestrator | 2026-03-26 02:26:10.226062 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-26 02:26:10.226073 | orchestrator | Thursday 26 March 2026 02:26:01 +0000 (0:00:02.830) 0:00:18.402 ******** 2026-03-26 02:26:10.226082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:10.226092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:10.226132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:10.226175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:10.226216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:10.226226 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:10.226235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:10.226243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:10.226252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:10.226260 | orchestrator | 2026-03-26 02:26:10.226268 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-26 02:26:10.226276 | orchestrator | Thursday 26 March 2026 02:26:04 +0000 (0:00:03.217) 0:00:21.620 ******** 2026-03-26 02:26:10.226292 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-26 02:26:10.226301 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-26 02:26:10.226309 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-26 02:26:10.226317 | orchestrator | 2026-03-26 02:26:10.226325 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-26 02:26:10.226333 | orchestrator | Thursday 26 March 2026 02:26:06 +0000 (0:00:01.956) 0:00:23.576 ******** 2026-03-26 02:26:10.226341 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-26 02:26:10.226350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-26 02:26:10.226357 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-26 02:26:10.226365 | orchestrator | 2026-03-26 02:26:10.226373 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-26 02:26:10.226381 | orchestrator | Thursday 26 March 2026 02:26:09 +0000 
(0:00:03.037) 0:00:26.614 ******** 2026-03-26 02:26:10.226389 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:10.226399 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:10.226407 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:10.226415 | orchestrator | 2026-03-26 02:26:10.226431 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-26 02:26:21.863936 | orchestrator | Thursday 26 March 2026 02:26:10 +0000 (0:00:00.609) 0:00:27.223 ******** 2026-03-26 02:26:21.864058 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-26 02:26:21.864086 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-26 02:26:21.864097 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-26 02:26:21.864143 | orchestrator | 2026-03-26 02:26:21.864155 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-26 02:26:21.864165 | orchestrator | Thursday 26 March 2026 02:26:12 +0000 (0:00:02.106) 0:00:29.329 ******** 2026-03-26 02:26:21.864177 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-26 02:26:21.864187 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-26 02:26:21.864197 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-26 02:26:21.864207 | orchestrator | 2026-03-26 02:26:21.864217 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-26 02:26:21.864227 | orchestrator | Thursday 26 March 2026 
02:26:14 +0000 (0:00:02.125) 0:00:31.455 ******** 2026-03-26 02:26:21.864238 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-03-26 02:26:21.864249 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-26 02:26:21.864258 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-26 02:26:21.864268 | orchestrator | 2026-03-26 02:26:21.864292 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-26 02:26:21.864303 | orchestrator | Thursday 26 March 2026 02:26:15 +0000 (0:00:01.462) 0:00:32.918 ******** 2026-03-26 02:26:21.864314 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-26 02:26:21.864324 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-26 02:26:21.864334 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-26 02:26:21.864343 | orchestrator | 2026-03-26 02:26:21.864377 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-26 02:26:21.864397 | orchestrator | Thursday 26 March 2026 02:26:17 +0000 (0:00:01.414) 0:00:34.332 ******** 2026-03-26 02:26:21.864416 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:26:21.864435 | orchestrator | 2026-03-26 02:26:21.864454 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-26 02:26:21.864470 | orchestrator | Thursday 26 March 2026 02:26:17 +0000 (0:00:00.556) 0:00:34.889 ******** 2026-03-26 02:26:21.864494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:21.864520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:21.864549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:21.864596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:21.864611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:21.864628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:21.864663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:21.864683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:21.864703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:21.864724 | orchestrator | 2026-03-26 02:26:21.864744 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-26 02:26:21.864762 | orchestrator | Thursday 26 March 2026 02:26:21 +0000 (0:00:03.351) 0:00:38.240 ******** 2026-03-26 02:26:21.864795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:26:22.664775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:22.664878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:22.664921 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:22.664936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:26:22.664980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:22.664992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:22.665004 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:22.665015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:22.665068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:22.665081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:22.665125 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:22.665138 | orchestrator | 2026-03-26 02:26:22.665150 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-26 
02:26:22.665163 | orchestrator | Thursday 26 March 2026 02:26:21 +0000 (0:00:00.620) 0:00:38.861 ******** 2026-03-26 02:26:22.665176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:26:22.665188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:22.665199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:22.665210 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:22.665222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:26:22.665246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:23.693328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:23.693449 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:23.693464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:23.693473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:23.693480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:23.693487 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:23.693493 | orchestrator | 2026-03-26 02:26:23.693500 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-26 02:26:23.693509 | orchestrator | Thursday 26 March 2026 02:26:22 +0000 (0:00:00.800) 0:00:39.661 ******** 2026-03-26 02:26:23.693515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:26:23.693522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:23.693544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:23.693556 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:23.693563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:26:23.693570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:23.693576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:23.693583 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:23.693589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:23.693609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:23.693619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:23.693634 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:25.162350 | orchestrator | 2026-03-26 02:26:25.162439 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-26 02:26:25.162452 | orchestrator | Thursday 26 March 2026 02:26:23 +0000 (0:00:01.018) 0:00:40.680 ******** 2026-03-26 02:26:25.162464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:26:25.162490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:25.162501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:25.162519 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:25.162529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:26:25.162539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:25.162568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:25.162595 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:25.162622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:25.162630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:25.162639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:25.162647 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:25.162655 | orchestrator | 2026-03-26 02:26:25.162663 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-26 02:26:25.162671 | orchestrator | Thursday 26 March 2026 02:26:24 +0000 (0:00:00.654) 0:00:41.334 ******** 2026-03-26 02:26:25.162681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:26:25.162696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:25.162723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:25.162738 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:25.162768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:26:26.280080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:26.280256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:26.280284 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:26.280307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:26.280326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:26.280345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:26.280398 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:26.280419 | orchestrator | 2026-03-26 02:26:26.280436 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-26 02:26:26.280455 | orchestrator | Thursday 26 March 2026 02:26:25 +0000 (0:00:00.822) 0:00:42.157 ******** 2026-03-26 02:26:26.280491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-03-26 02:26:26.280532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:26.280546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:26.280563 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:26.280579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-26 02:26:26.280595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:26.280623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:26.280646 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:26.280680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-26 02:26:26.280719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:27.696586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:27.696722 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:27.696742 | orchestrator | 2026-03-26 02:26:27.696756 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-03-26 02:26:27.696768 | orchestrator | Thursday 26 March 2026 02:26:26 +0000 (0:00:01.113) 0:00:43.270 ******** 2026-03-26 02:26:27.696782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:26:27.696795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:27.696831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:27.696843 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:27.696855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:26:27.696881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:27.696915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:27.696927 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:27.696938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:27.696950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:27.696974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:27.696986 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:27.696996 | orchestrator | 2026-03-26 02:26:27.697008 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-03-26 02:26:27.697019 | orchestrator | Thursday 26 March 2026 02:26:26 +0000 (0:00:00.614) 0:00:43.885 ******** 2026-03-26 02:26:27.697030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 02:26:27.697041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:27.697067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:34.398311 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:34.398389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 02:26:34.398401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:34.398427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:34.398435 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:34.398442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 02:26:34.398450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 02:26:34.398470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 02:26:34.398477 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:34.398484 | orchestrator | 2026-03-26 02:26:34.398492 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-26 02:26:34.398498 | orchestrator | Thursday 26 March 2026 02:26:27 +0000 (0:00:00.804) 0:00:44.689 ******** 2026-03-26 02:26:34.398502 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-26 02:26:34.398519 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-26 02:26:34.398524 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-26 02:26:34.398530 | orchestrator | 2026-03-26 02:26:34.398536 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-26 02:26:34.398543 | orchestrator | Thursday 26 March 2026 02:26:29 +0000 (0:00:01.699) 0:00:46.389 ******** 2026-03-26 02:26:34.398550 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-26 02:26:34.398556 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-26 02:26:34.398562 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-26 02:26:34.398569 | orchestrator | 2026-03-26 02:26:34.398581 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-26 02:26:34.398587 | orchestrator | Thursday 26 March 2026 02:26:31 +0000 (0:00:01.762) 0:00:48.152 ******** 2026-03-26 02:26:34.398593 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 02:26:34.398599 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 02:26:34.398605 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 02:26:34.398609 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 02:26:34.398613 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:34.398617 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 02:26:34.398620 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:34.398624 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 02:26:34.398628 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:34.398632 | orchestrator | 2026-03-26 02:26:34.398636 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-03-26 02:26:34.398639 | orchestrator | Thursday 26 March 2026 02:26:31 +0000 (0:00:00.812) 0:00:48.964 ******** 2026-03-26 02:26:34.398644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:34.398648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:34.398656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 02:26:34.398666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:38.508918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:38.509032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 02:26:38.509048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:38.509059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:38.509068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 02:26:38.509078 | orchestrator | 2026-03-26 02:26:38.509173 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-26 02:26:38.509204 | orchestrator | Thursday 26 March 2026 02:26:34 +0000 (0:00:02.431) 0:00:51.395 ******** 2026-03-26 02:26:38.509214 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:26:38.509224 | orchestrator | 2026-03-26 02:26:38.509233 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-26 02:26:38.509242 | orchestrator | Thursday 26 March 2026 02:26:35 +0000 (0:00:00.818) 0:00:52.213 ******** 2026-03-26 02:26:38.509270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 02:26:38.509302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 02:26:38.509313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:38.509322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 02:26:38.509332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 02:26:38.509346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 02:26:38.509356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:38.509380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 02:26:39.167665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 02:26:39.167745 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 02:26:39.167755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:39.167777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 02:26:39.167784 | orchestrator | 2026-03-26 02:26:39.167792 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-03-26 02:26:39.167800 | orchestrator | Thursday 26 March 2026 02:26:38 +0000 (0:00:03.286) 0:00:55.500 ******** 2026-03-26 02:26:39.167808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 02:26:39.167844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 02:26:39.167852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:39.167859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 02:26:39.167866 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:39.167873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 02:26:39.167884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 02:26:39.167895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:39.167902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 02:26:39.167909 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:39.167921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 02:26:47.739731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 02:26:47.739814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-03-26 02:26:47.739821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 02:26:47.739843 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:47.739850 | orchestrator | 2026-03-26 02:26:47.739855 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-26 02:26:47.739862 | orchestrator | Thursday 26 March 2026 02:26:39 +0000 (0:00:00.660) 0:00:56.161 ******** 2026-03-26 02:26:47.739867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-26 02:26:47.739874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-26 02:26:47.739881 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:47.739897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-26 02:26:47.739902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-26 02:26:47.739907 | 
orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:47.739912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-26 02:26:47.739917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-26 02:26:47.739921 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:47.739926 | orchestrator | 2026-03-26 02:26:47.739930 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-26 02:26:47.739935 | orchestrator | Thursday 26 March 2026 02:26:40 +0000 (0:00:01.127) 0:00:57.288 ******** 2026-03-26 02:26:47.739940 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:26:47.739945 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:26:47.739949 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:26:47.739954 | orchestrator | 2026-03-26 02:26:47.739959 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-26 02:26:47.739964 | orchestrator | Thursday 26 March 2026 02:26:41 +0000 (0:00:01.295) 0:00:58.584 ******** 2026-03-26 02:26:47.739968 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:26:47.739973 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:26:47.739978 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:26:47.739982 | orchestrator | 2026-03-26 02:26:47.739987 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-26 02:26:47.739992 | orchestrator | Thursday 26 March 2026 02:26:43 +0000 (0:00:02.068) 0:01:00.653 ******** 2026-03-26 02:26:47.739996 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:26:47.740001 | 
orchestrator | 2026-03-26 02:26:47.740017 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-26 02:26:47.740022 | orchestrator | Thursday 26 March 2026 02:26:44 +0000 (0:00:00.678) 0:01:01.331 ******** 2026-03-26 02:26:47.740028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 02:26:47.740039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:47.740048 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:26:47.740054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 02:26:47.740059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:47.740068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:26:48.410819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 02:26:48.410920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:48.410933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:26:48.410942 | orchestrator | 2026-03-26 02:26:48.410951 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-26 02:26:48.410961 | orchestrator | Thursday 26 March 2026 02:26:47 +0000 (0:00:03.402) 0:01:04.734 ******** 2026-03-26 02:26:48.410970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 02:26:48.410978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:48.411027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:26:48.411042 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:48.411062 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 02:26:48.411075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:48.411087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:26:48.411100 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:48.411111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 02:26:48.411223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 02:26:58.354436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:26:58.354549 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:58.354566 | orchestrator | 2026-03-26 02:26:58.354578 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-26 02:26:58.354589 | orchestrator | Thursday 26 March 2026 02:26:48 +0000 (0:00:00.668) 0:01:05.403 ******** 2026-03-26 02:26:58.354626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-26 02:26:58.354641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-26 02:26:58.354654 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:58.354665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-26 02:26:58.354676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-26 02:26:58.354683 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:58.354689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-26 02:26:58.354696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-26 02:26:58.354702 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:58.354708 | orchestrator | 2026-03-26 02:26:58.354715 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-26 02:26:58.354722 | orchestrator | Thursday 26 March 2026 02:26:49 +0000 (0:00:00.828) 0:01:06.232 ******** 2026-03-26 02:26:58.354728 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:26:58.354735 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:26:58.354741 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:26:58.354748 | orchestrator | 2026-03-26 02:26:58.354754 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-26 02:26:58.354761 | orchestrator | Thursday 26 March 2026 02:26:50 +0000 (0:00:01.596) 0:01:07.828 ******** 2026-03-26 02:26:58.354785 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:26:58.354792 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:26:58.354798 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:26:58.354804 | orchestrator | 2026-03-26 02:26:58.354811 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-26 02:26:58.354817 | orchestrator | 
Thursday 26 March 2026 02:26:52 +0000 (0:00:02.053) 0:01:09.882 ******** 2026-03-26 02:26:58.354823 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:58.354830 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:58.354836 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:26:58.354842 | orchestrator | 2026-03-26 02:26:58.354848 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-26 02:26:58.354855 | orchestrator | Thursday 26 March 2026 02:26:53 +0000 (0:00:00.333) 0:01:10.215 ******** 2026-03-26 02:26:58.354861 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:26:58.354867 | orchestrator | 2026-03-26 02:26:58.354874 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-26 02:26:58.354880 | orchestrator | Thursday 26 March 2026 02:26:53 +0000 (0:00:00.738) 0:01:10.954 ******** 2026-03-26 02:26:58.354904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-26 02:26:58.354917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-26 02:26:58.354924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-26 02:26:58.354931 | orchestrator | 2026-03-26 02:26:58.354937 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-26 02:26:58.354945 | orchestrator | Thursday 26 March 2026 02:26:56 +0000 (0:00:02.822) 0:01:13.776 ******** 2026-03-26 02:26:58.354957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-26 02:26:58.354964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-26 02:26:58.354972 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:26:58.354979 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:26:58.354993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-26 02:27:06.143913 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:06.144054 | orchestrator | 2026-03-26 02:27:06.144082 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-26 02:27:06.144104 | orchestrator | Thursday 26 March 2026 02:26:58 +0000 (0:00:01.575) 0:01:15.352 ******** 2026-03-26 02:27:06.144217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 02:27:06.144246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 02:27:06.144268 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:06.144319 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 02:27:06.144339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 02:27:06.144359 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:06.144378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 02:27:06.144397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 02:27:06.144418 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:06.144437 | orchestrator | 2026-03-26 02:27:06.144457 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-26 02:27:06.144476 | orchestrator | Thursday 26 March 2026 02:26:59 +0000 (0:00:01.637) 0:01:16.990 ******** 2026-03-26 02:27:06.144496 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:06.144516 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:06.144536 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:06.144555 | orchestrator | 2026-03-26 02:27:06.144580 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-26 02:27:06.144600 | orchestrator | Thursday 26 March 2026 02:27:00 +0000 (0:00:00.440) 0:01:17.431 ******** 2026-03-26 02:27:06.144620 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:06.144640 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:06.144660 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:06.144679 | orchestrator | 2026-03-26 02:27:06.144698 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-26 02:27:06.144715 | orchestrator | Thursday 26 March 2026 02:27:01 +0000 (0:00:01.296) 0:01:18.727 ******** 2026-03-26 02:27:06.144733 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:27:06.144752 | orchestrator | 2026-03-26 02:27:06.144771 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-26 02:27:06.144790 | orchestrator | Thursday 26 March 2026 02:27:02 +0000 (0:00:01.012) 0:01:19.740 ******** 2026-03-26 02:27:06.144847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 02:27:06.144886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:27:06.144908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 
02:27:06.144930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 02:27:06.144950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 02:27:06.144982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 02:27:06.875175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:27:06.875284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-26 02:27:06.875301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:27:06.875315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 02:27:06.875326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-26 02:27:06.875356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-26 02:27:06.875392 | orchestrator |
2026-03-26 02:27:06.875412 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-03-26 02:27:06.875425 | orchestrator | Thursday 26 March 2026 02:27:06 +0000 (0:00:03.487) 0:01:23.227 ********
2026-03-26 02:27:06.875439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-26 02:27:06.875451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 02:27:06.875463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-26 02:27:06.875475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-26 02:27:06.875487 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:27:06.875500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-26 02:27:06.875533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.586240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.586367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.586385 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:27:16.586408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-26 02:27:16.586421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.586468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.586507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.586525 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:27:16.586541 | orchestrator |
2026-03-26 02:27:16.586557 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-03-26 02:27:16.586573 | orchestrator | Thursday 26 March 2026 02:27:06 +0000 (0:00:00.759) 0:01:23.986 ********
2026-03-26 02:27:16.586626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-26 02:27:16.586643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-26 02:27:16.586659 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:27:16.586673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-26 02:27:16.586687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-26 02:27:16.586703 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:27:16.586721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-26 02:27:16.586736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-03-26 02:27:16.586749 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:27:16.586763 | orchestrator |
2026-03-26 02:27:16.586779 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-03-26 02:27:16.586795 | orchestrator | Thursday 26 March 2026 02:27:08 +0000 (0:00:01.222) 0:01:25.209 ********
2026-03-26 02:27:16.586811 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:27:16.586842 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:27:16.586859 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:27:16.586876 | orchestrator |
2026-03-26 02:27:16.586887 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-03-26 02:27:16.586897 | orchestrator | Thursday 26 March 2026 02:27:09 +0000 (0:00:01.266) 0:01:26.476 ********
2026-03-26 02:27:16.586907 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:27:16.586918 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:27:16.586928 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:27:16.586938 | orchestrator |
2026-03-26 02:27:16.586948 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-03-26 02:27:16.586958 | orchestrator | Thursday 26 March 2026 02:27:11 +0000 (0:00:02.038) 0:01:28.515 ********
2026-03-26 02:27:16.586968 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:27:16.586978 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:27:16.586988 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:27:16.586997 | orchestrator |
2026-03-26 02:27:16.587007 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-03-26 02:27:16.587017 | orchestrator | Thursday 26 March 2026 02:27:11 +0000 (0:00:00.313) 0:01:28.828 ********
2026-03-26 02:27:16.587027 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:27:16.587037 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:27:16.587047 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:27:16.587057 | orchestrator |
2026-03-26 02:27:16.587067 | orchestrator | TASK [include_role : designate] ************************************************
2026-03-26 02:27:16.587077 | orchestrator | Thursday 26 March 2026 02:27:12 +0000 (0:00:00.338) 0:01:29.167 ********
2026-03-26 02:27:16.587088 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:27:16.587106 | orchestrator |
2026-03-26 02:27:16.587122 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-03-26 02:27:16.587163 | orchestrator | Thursday 26 March 2026 02:27:13 +0000 (0:00:00.976) 0:01:30.143 ********
2026-03-26 02:27:16.587206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 02:27:16.864560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 02:27:16.864670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 02:27:16.864810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 02:27:16.864834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 02:27:16.864894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.558805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 02:27:17.558972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 02:27:17.559001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559209 | orchestrator |
2026-03-26 02:27:17.559229 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-03-26 02:27:17.559247 | orchestrator | Thursday 26 March 2026 02:27:16 +0000 (0:00:03.718) 0:01:33.861 ********
2026-03-26 02:27:17.559265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 02:27:17.559281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 02:27:17.559297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.559363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.949570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.949665 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:27:17.949688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 02:27:17.949697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 02:27:17.950112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.950162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.950172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.950217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.950228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-03-26 02:27:17.950235 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:17.950242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 02:27:17.950249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 02:27:17.950255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 02:27:17.950268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 02:27:17.950282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 02:27:28.543302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:27:28.543459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-26 02:27:28.543486 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:28.543507 | orchestrator | 2026-03-26 02:27:28.543527 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-26 02:27:28.543548 | orchestrator | Thursday 26 March 2026 02:27:17 +0000 (0:00:01.078) 0:01:34.940 ******** 2026-03-26 02:27:28.543569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-26 02:27:28.543589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-26 02:27:28.543609 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:28.543629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2026-03-26 02:27:28.543648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-26 02:27:28.543669 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:28.543689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-26 02:27:28.543736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-26 02:27:28.543749 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:28.543761 | orchestrator | 2026-03-26 02:27:28.543774 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-26 02:27:28.543787 | orchestrator | Thursday 26 March 2026 02:27:19 +0000 (0:00:01.314) 0:01:36.254 ******** 2026-03-26 02:27:28.543800 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:27:28.543813 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:27:28.543827 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:27:28.543839 | orchestrator | 2026-03-26 02:27:28.543851 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-26 02:27:28.543863 | orchestrator | Thursday 26 March 2026 02:27:20 +0000 (0:00:01.332) 0:01:37.587 ******** 2026-03-26 02:27:28.543875 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:27:28.543887 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:27:28.543899 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:27:28.543915 | orchestrator | 2026-03-26 02:27:28.543934 | orchestrator | TASK [include_role : 
etcd] ***************************************************** 2026-03-26 02:27:28.543952 | orchestrator | Thursday 26 March 2026 02:27:22 +0000 (0:00:02.013) 0:01:39.600 ******** 2026-03-26 02:27:28.543968 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:28.543986 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:28.544004 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:28.544021 | orchestrator | 2026-03-26 02:27:28.544039 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-26 02:27:28.544057 | orchestrator | Thursday 26 March 2026 02:27:22 +0000 (0:00:00.354) 0:01:39.955 ******** 2026-03-26 02:27:28.544075 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:27:28.544093 | orchestrator | 2026-03-26 02:27:28.544110 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-26 02:27:28.544129 | orchestrator | Thursday 26 March 2026 02:27:24 +0000 (0:00:01.159) 0:01:41.114 ******** 2026-03-26 02:27:28.544229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 02:27:28.544250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 02:27:28.544299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 02:27:31.510732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 02:27:31.510897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 02:27:31.510943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 02:27:31.510970 | orchestrator | 2026-03-26 02:27:31.510986 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-26 02:27:31.511001 | orchestrator | Thursday 26 March 2026 02:27:28 +0000 (0:00:04.520) 0:01:45.635 ******** 2026-03-26 02:27:31.511023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 02:27:31.511049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 02:27:35.961311 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:35.961424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 02:27:35.961463 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 02:27:35.961537 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:35.961573 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 02:27:35.961594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 02:27:35.961616 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:35.961628 | orchestrator | 2026-03-26 02:27:35.961641 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-26 02:27:35.961653 | orchestrator | Thursday 26 March 2026 02:27:31 +0000 (0:00:02.970) 0:01:48.605 ******** 2026-03-26 
02:27:35.961666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-26 02:27:35.961688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-26 02:27:44.538279 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:44.538374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-26 02:27:44.538388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-26 02:27:44.538398 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:44.538425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-26 02:27:44.538448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-26 02:27:44.538457 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:44.538465 | orchestrator | 2026-03-26 02:27:44.538474 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-26 02:27:44.538483 | orchestrator | Thursday 26 March 2026 02:27:35 +0000 (0:00:04.351) 0:01:52.957 ******** 2026-03-26 02:27:44.538511 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:27:44.538519 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:27:44.538526 | orchestrator | changed: 
[testbed-node-2] 2026-03-26 02:27:44.538533 | orchestrator | 2026-03-26 02:27:44.538541 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-26 02:27:44.538548 | orchestrator | Thursday 26 March 2026 02:27:37 +0000 (0:00:01.349) 0:01:54.307 ******** 2026-03-26 02:27:44.538555 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:27:44.538563 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:27:44.538570 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:27:44.538577 | orchestrator | 2026-03-26 02:27:44.538589 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-26 02:27:44.538600 | orchestrator | Thursday 26 March 2026 02:27:39 +0000 (0:00:02.134) 0:01:56.441 ******** 2026-03-26 02:27:44.538611 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:44.538623 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:44.538634 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:44.538645 | orchestrator | 2026-03-26 02:27:44.538667 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-26 02:27:44.538689 | orchestrator | Thursday 26 March 2026 02:27:39 +0000 (0:00:00.359) 0:01:56.800 ******** 2026-03-26 02:27:44.538702 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:27:44.538714 | orchestrator | 2026-03-26 02:27:44.538727 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-26 02:27:44.538737 | orchestrator | Thursday 26 March 2026 02:27:40 +0000 (0:00:01.123) 0:01:57.924 ******** 2026-03-26 02:27:44.538770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 02:27:44.538787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 02:27:44.538801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 02:27:44.538814 | orchestrator | 2026-03-26 02:27:44.538827 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-26 02:27:44.538853 | orchestrator | Thursday 26 March 2026 02:27:43 +0000 (0:00:02.977) 0:02:00.901 ******** 2026-03-26 02:27:44.538867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 02:27:44.538881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 02:27:44.538894 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:44.538908 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:44.538920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 02:27:44.539017 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:44.539037 | orchestrator | 2026-03-26 02:27:44.539046 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-26 02:27:44.539055 | orchestrator | Thursday 26 March 2026 02:27:44 +0000 (0:00:00.400) 0:02:01.302 ******** 2026-03-26 02:27:44.539064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-26 02:27:44.539084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-26 02:27:53.420219 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:53.420340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-26 02:27:53.420363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-26 02:27:53.420382 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
02:27:53.420399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-26 02:27:53.420415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-26 02:27:53.420455 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:53.420471 | orchestrator | 2026-03-26 02:27:53.420482 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-26 02:27:53.420493 | orchestrator | Thursday 26 March 2026 02:27:45 +0000 (0:00:00.930) 0:02:02.232 ******** 2026-03-26 02:27:53.420502 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:27:53.420511 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:27:53.420520 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:27:53.420528 | orchestrator | 2026-03-26 02:27:53.420537 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-26 02:27:53.420546 | orchestrator | Thursday 26 March 2026 02:27:46 +0000 (0:00:01.294) 0:02:03.527 ******** 2026-03-26 02:27:53.420555 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:27:53.420564 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:27:53.420573 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:27:53.420581 | orchestrator | 2026-03-26 02:27:53.420590 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-26 02:27:53.420614 | orchestrator | Thursday 26 March 2026 02:27:48 +0000 (0:00:02.027) 0:02:05.555 ******** 2026-03-26 02:27:53.420624 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:53.420633 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:53.420641 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 02:27:53.420650 | orchestrator | 2026-03-26 02:27:53.420659 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-26 02:27:53.420668 | orchestrator | Thursday 26 March 2026 02:27:48 +0000 (0:00:00.342) 0:02:05.897 ******** 2026-03-26 02:27:53.420677 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:27:53.420685 | orchestrator | 2026-03-26 02:27:53.420694 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-26 02:27:53.420704 | orchestrator | Thursday 26 March 2026 02:27:50 +0000 (0:00:01.208) 0:02:07.106 ******** 2026-03-26 02:27:53.420739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 02:27:53.420770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 02:27:53.420791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 02:27:55.334202 | orchestrator | 2026-03-26 02:27:55.334304 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-26 02:27:55.334315 | orchestrator | Thursday 26 March 2026 02:27:53 +0000 (0:00:03.313) 0:02:10.420 ******** 2026-03-26 02:27:55.334338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 02:27:55.334397 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:27:55.334421 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 02:27:55.334446 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:27:55.334457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 02:27:55.334464 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:27:55.334469 | orchestrator | 2026-03-26 02:27:55.334474 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-26 02:27:55.334480 | orchestrator | Thursday 26 March 2026 02:27:54 +0000 (0:00:00.742) 0:02:11.162 ******** 2026-03-26 02:27:55.334486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-26 02:27:55.334499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-26 02:27:55.334505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 02:27:55.334517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 02:28:04.307241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-26 02:28:04.307423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-26 02:28:04.307476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 02:28:04.307498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 02:28:04.307517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-26 02:28:04.307539 | orchestrator | 
skipping: [testbed-node-1] 2026-03-26 02:28:04.307562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-26 02:28:04.307581 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:04.307598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-26 02:28:04.307613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 02:28:04.307626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-26 02:28:04.307670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 02:28:04.307690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-26 02:28:04.307709 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 02:28:04.307728 | orchestrator | 2026-03-26 02:28:04.307749 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-26 02:28:04.307769 | orchestrator | Thursday 26 March 2026 02:27:55 +0000 (0:00:01.167) 0:02:12.330 ******** 2026-03-26 02:28:04.307789 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:04.307810 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:04.307830 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:04.307846 | orchestrator | 2026-03-26 02:28:04.307860 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-26 02:28:04.307872 | orchestrator | Thursday 26 March 2026 02:27:56 +0000 (0:00:01.631) 0:02:13.962 ******** 2026-03-26 02:28:04.307886 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:04.307900 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:04.307912 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:04.307925 | orchestrator | 2026-03-26 02:28:04.307938 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-26 02:28:04.307950 | orchestrator | Thursday 26 March 2026 02:27:59 +0000 (0:00:02.205) 0:02:16.167 ******** 2026-03-26 02:28:04.307984 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:04.307997 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:04.308010 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:04.308023 | orchestrator | 2026-03-26 02:28:04.308036 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-26 02:28:04.308049 | orchestrator | Thursday 26 March 2026 02:27:59 +0000 (0:00:00.310) 0:02:16.478 ******** 2026-03-26 02:28:04.308062 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:04.308075 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:04.308088 | orchestrator | skipping: 
[testbed-node-2] 2026-03-26 02:28:04.308100 | orchestrator | 2026-03-26 02:28:04.308111 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-26 02:28:04.308122 | orchestrator | Thursday 26 March 2026 02:27:59 +0000 (0:00:00.346) 0:02:16.824 ******** 2026-03-26 02:28:04.308134 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:28:04.308145 | orchestrator | 2026-03-26 02:28:04.308180 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-26 02:28:04.308193 | orchestrator | Thursday 26 March 2026 02:28:01 +0000 (0:00:01.252) 0:02:18.077 ******** 2026-03-26 02:28:04.308228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 02:28:04.308271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 02:28:04.308293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 02:28:04.308327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 02:28:05.015235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 02:28:05.015344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 02:28:05.015383 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 02:28:05.015395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 02:28:05.015404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 02:28:05.015414 | orchestrator | 2026-03-26 02:28:05.015425 | orchestrator | TASK 
[haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-26 02:28:05.015436 | orchestrator | Thursday 26 March 2026 02:28:04 +0000 (0:00:03.224) 0:02:21.302 ******** 2026-03-26 02:28:05.015462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-26 02:28:05.015478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 02:28:05.015488 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 02:28:05.015505 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:05.015517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-26 02:28:05.015528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 02:28:05.015544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 02:28:14.971452 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:14.971630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-26 02:28:14.971702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 02:28:14.971725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 02:28:14.971746 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:14.971765 | orchestrator | 2026-03-26 02:28:14.971785 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-26 02:28:14.971806 | orchestrator | Thursday 26 March 2026 02:28:04 +0000 (0:00:00.702) 0:02:22.004 ******** 2026-03-26 02:28:14.971826 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-26 02:28:14.971849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-26 02:28:14.971870 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:14.971890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-26 02:28:14.971911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-26 02:28:14.971931 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:14.971950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-26 02:28:14.971994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-26 02:28:14.972015 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:14.972035 | 
orchestrator | 2026-03-26 02:28:14.972055 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-26 02:28:14.972075 | orchestrator | Thursday 26 March 2026 02:28:06 +0000 (0:00:01.183) 0:02:23.187 ******** 2026-03-26 02:28:14.972094 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:14.972114 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:14.972147 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:14.972247 | orchestrator | 2026-03-26 02:28:14.972268 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-26 02:28:14.972288 | orchestrator | Thursday 26 March 2026 02:28:07 +0000 (0:00:01.380) 0:02:24.568 ******** 2026-03-26 02:28:14.972308 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:14.972327 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:14.972347 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:14.972367 | orchestrator | 2026-03-26 02:28:14.972387 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-26 02:28:14.972407 | orchestrator | Thursday 26 March 2026 02:28:09 +0000 (0:00:02.118) 0:02:26.686 ******** 2026-03-26 02:28:14.972427 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:14.972455 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:14.972476 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:14.972497 | orchestrator | 2026-03-26 02:28:14.972535 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-26 02:28:14.972556 | orchestrator | Thursday 26 March 2026 02:28:09 +0000 (0:00:00.315) 0:02:27.002 ******** 2026-03-26 02:28:14.972577 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:28:14.972596 | orchestrator | 2026-03-26 02:28:14.972616 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy 
config] ********************* 2026-03-26 02:28:14.972636 | orchestrator | Thursday 26 March 2026 02:28:11 +0000 (0:00:01.413) 0:02:28.415 ******** 2026-03-26 02:28:14.972658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 02:28:14.972685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:28:14.972708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 02:28:14.972757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:28:16.665677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 02:28:16.665787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:28:16.665805 | orchestrator | 2026-03-26 02:28:16.665820 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-26 02:28:16.665832 | orchestrator | Thursday 26 March 2026 02:28:14 +0000 (0:00:03.550) 0:02:31.966 ******** 2026-03-26 02:28:16.665846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 02:28:16.665946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:28:16.665988 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:16.666122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 02:28:16.666139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:28:16.666151 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:16.666192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 02:28:16.666208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:28:16.666232 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:16.666245 | orchestrator | 2026-03-26 02:28:16.666257 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-26 02:28:16.666270 | orchestrator | Thursday 26 March 2026 02:28:15 +0000 (0:00:00.753) 0:02:32.720 ******** 2026-03-26 02:28:16.666284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-26 02:28:16.666299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-26 02:28:16.666313 | orchestrator | skipping: 
[testbed-node-0] 2026-03-26 02:28:16.666325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-26 02:28:16.666338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-26 02:28:16.666350 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:16.666363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-26 02:28:16.666385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-26 02:28:25.470206 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:25.470338 | orchestrator | 2026-03-26 02:28:25.470381 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-26 02:28:25.470395 | orchestrator | Thursday 26 March 2026 02:28:16 +0000 (0:00:00.941) 0:02:33.661 ******** 2026-03-26 02:28:25.470406 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:25.470416 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:25.470426 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:25.470436 | orchestrator | 2026-03-26 02:28:25.470446 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-26 02:28:25.470456 | orchestrator | Thursday 26 March 2026 02:28:18 +0000 (0:00:01.687) 0:02:35.349 ******** 2026-03-26 02:28:25.470466 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:25.470476 | orchestrator | changed: 
[testbed-node-1] 2026-03-26 02:28:25.470485 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:25.470495 | orchestrator | 2026-03-26 02:28:25.470505 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-26 02:28:25.470515 | orchestrator | Thursday 26 March 2026 02:28:20 +0000 (0:00:02.132) 0:02:37.481 ******** 2026-03-26 02:28:25.470525 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:28:25.470535 | orchestrator | 2026-03-26 02:28:25.470545 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-26 02:28:25.470555 | orchestrator | Thursday 26 March 2026 02:28:21 +0000 (0:00:01.120) 0:02:38.601 ******** 2026-03-26 02:28:25.470568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-26 02:28:25.470605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:28:25.470618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 02:28:25.470629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 02:28:25.470665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-26 02:28:25.470677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:28:25.470688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-26 02:28:25.470706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 02:28:25.470717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 02:28:25.470731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:28:25.470771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548445 | orchestrator | 2026-03-26 02:28:26.548457 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-26 02:28:26.548471 | orchestrator | Thursday 26 March 2026 02:28:25 +0000 (0:00:03.958) 0:02:42.560 ******** 2026-03-26 02:28:26.548507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 02:28:26.548515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548536 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:26.548575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 02:28:26.548583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548607 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:26.548614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 02:28:26.548620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:28:26.548634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 02:28:38.023981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 02:28:38.024138 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:38.024166 | orchestrator | 2026-03-26 02:28:38.024218 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-26 02:28:38.024237 | orchestrator | Thursday 26 March 2026 02:28:26 +0000 (0:00:01.091) 0:02:43.651 ******** 2026-03-26 02:28:38.024257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-26 02:28:38.024269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-26 02:28:38.024281 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:38.024306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-26 02:28:38.024326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-26 02:28:38.024337 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:38.024346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-26 02:28:38.024356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-26 02:28:38.024377 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:38.024397 | orchestrator | 2026-03-26 02:28:38.024407 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-26 02:28:38.024417 | orchestrator | Thursday 26 March 2026 02:28:27 +0000 (0:00:00.947) 0:02:44.599 ******** 2026-03-26 02:28:38.024427 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:38.024436 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:38.024446 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:38.024455 | orchestrator | 2026-03-26 02:28:38.024465 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-26 02:28:38.024475 | orchestrator | Thursday 26 March 2026 02:28:28 +0000 (0:00:01.299) 0:02:45.899 ******** 2026-03-26 02:28:38.024486 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:38.024498 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:38.024509 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:38.024520 | orchestrator | 2026-03-26 02:28:38.024531 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-26 02:28:38.024542 | orchestrator | Thursday 26 March 2026 02:28:31 +0000 (0:00:02.141) 0:02:48.041 ******** 2026-03-26 02:28:38.024553 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:28:38.024564 | orchestrator | 2026-03-26 02:28:38.024576 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-26 02:28:38.024587 | orchestrator | Thursday 26 March 2026 02:28:32 +0000 (0:00:01.387) 0:02:49.428 ******** 2026-03-26 02:28:38.024599 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 02:28:38.024610 | orchestrator | 2026-03-26 02:28:38.024626 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-26 02:28:38.024658 | orchestrator | Thursday 26 March 2026 02:28:35 +0000 (0:00:03.120) 0:02:52.548 ******** 2026-03-26 02:28:38.024735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:28:38.024759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 02:28:38.024772 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:38.024791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:28:38.024824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 02:28:40.521744 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:40.521820 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:28:40.521829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 02:28:40.521834 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:40.521838 | orchestrator | 2026-03-26 02:28:40.521843 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-26 02:28:40.521848 | orchestrator | Thursday 26 March 2026 02:28:38 +0000 (0:00:02.467) 0:02:55.015 ******** 2026-03-26 02:28:40.521893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:28:40.521899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 02:28:40.521904 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:40.521908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:28:40.521924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-03-26 02:28:40.521928 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:40.521936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:28:50.623069 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 02:28:50.623274 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:50.623306 | orchestrator | 2026-03-26 02:28:50.623327 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-26 02:28:50.623349 | orchestrator | Thursday 26 March 2026 02:28:40 +0000 (0:00:02.500) 0:02:57.516 ******** 2026-03-26 02:28:50.623370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 02:28:50.623418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 02:28:50.623447 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:50.623459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 02:28:50.623471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 02:28:50.623483 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:50.623494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 02:28:50.623529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 02:28:50.623541 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:50.623554 | orchestrator | 2026-03-26 02:28:50.623567 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-26 02:28:50.623580 | orchestrator | Thursday 26 March 2026 02:28:43 +0000 (0:00:03.062) 0:03:00.579 ******** 2026-03-26 02:28:50.623593 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:28:50.623615 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:28:50.623666 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:28:50.623678 | orchestrator | 2026-03-26 02:28:50.623690 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-26 02:28:50.623703 | orchestrator | Thursday 26 March 2026 02:28:45 +0000 (0:00:02.181) 0:03:02.760 ******** 2026-03-26 02:28:50.623717 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:50.623737 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:50.623876 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:50.623897 | orchestrator | 2026-03-26 02:28:50.623916 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-26 02:28:50.623935 | orchestrator | Thursday 26 March 2026 02:28:47 +0000 (0:00:01.460) 0:03:04.221 ******** 2026-03-26 02:28:50.623954 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:50.623973 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:50.623992 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:50.624010 | orchestrator | 2026-03-26 02:28:50.624030 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-26 02:28:50.624049 | orchestrator | Thursday 26 March 2026 02:28:47 +0000 (0:00:00.346) 0:03:04.568 ******** 2026-03-26 02:28:50.624068 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:28:50.624087 | orchestrator | 2026-03-26 02:28:50.624107 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-26 02:28:50.624127 | orchestrator | Thursday 26 March 2026 02:28:49 +0000 (0:00:01.452) 0:03:06.021 ******** 2026-03-26 02:28:50.624158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-26 02:28:50.624283 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-26 02:28:50.624311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-26 02:28:50.624331 | orchestrator | 2026-03-26 02:28:50.624350 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-26 02:28:50.624405 | orchestrator | Thursday 26 March 2026 02:28:50 +0000 (0:00:01.598) 0:03:07.619 ******** 2026-03-26 02:28:59.606883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 02:28:59.606988 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:59.607006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 02:28:59.607016 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:59.607026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 02:28:59.607042 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:59.607057 | orchestrator | 2026-03-26 02:28:59.607072 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-26 02:28:59.607089 | orchestrator | Thursday 26 March 2026 02:28:51 +0000 (0:00:00.405) 0:03:08.024 ******** 2026-03-26 02:28:59.607103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-26 02:28:59.607120 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:59.607136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-26 02:28:59.607152 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:59.607168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-26 02:28:59.607248 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:59.607260 | orchestrator | 2026-03-26 02:28:59.607339 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-26 02:28:59.607358 | orchestrator | Thursday 26 March 2026 02:28:51 +0000 (0:00:00.952) 0:03:08.977 ******** 2026-03-26 02:28:59.607368 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:59.607378 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:59.607388 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:59.607399 | orchestrator | 2026-03-26 02:28:59.607409 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-26 02:28:59.607419 | orchestrator | Thursday 26 March 2026 02:28:52 +0000 (0:00:00.443) 0:03:09.420 ******** 2026-03-26 02:28:59.607429 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:59.607439 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:59.607469 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:59.607479 | orchestrator | 2026-03-26 02:28:59.607489 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-26 02:28:59.607499 | orchestrator | Thursday 26 March 2026 02:28:53 +0000 (0:00:01.457) 0:03:10.878 ******** 2026-03-26 02:28:59.607509 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:28:59.607519 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:28:59.607529 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:28:59.607538 | orchestrator | 2026-03-26 02:28:59.607549 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-26 02:28:59.607559 | orchestrator | Thursday 26 March 2026 02:28:54 +0000 (0:00:00.351) 0:03:11.230 ******** 2026-03-26 02:28:59.607569 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:28:59.607579 | orchestrator | 2026-03-26 02:28:59.607588 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-03-26 02:28:59.607598 | orchestrator | Thursday 26 March 2026 02:28:55 +0000 (0:00:01.591) 0:03:12.822 ******** 2026-03-26 02:28:59.607610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 02:28:59.607629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.607641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.607663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.607682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-26 02:28:59.830324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.830403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:28:59.830428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:28:59.830437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.830463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:28:59.830471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.830490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-26 02:28:59.830497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:28:59.830504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.830517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 02:28:59.830530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:28:59.830538 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 02:28:59.830550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.953323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 02:28:59.953437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.953486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.953503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.953523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-26 02:28:59.953564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.953589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.953617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:28:59.953636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:28:59.953653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-26 02:28:59.953669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 
02:28:59.953695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:00.084360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:00.084501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:00.084519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:00.084531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:00.084541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:00.084552 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:00.084580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-26 02:29:00.084610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:00.084620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:00.084629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:00.084640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:00.084650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-26 02:29:00.084659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:00.084681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 02:29:01.289546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:01.289636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.289647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 02:29:01.289657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:01.289671 | orchestrator | 2026-03-26 02:29:01.289680 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-26 02:29:01.289711 | orchestrator | Thursday 26 March 2026 02:29:00 +0000 (0:00:04.359) 0:03:17.182 ******** 2026-03-26 02:29:01.289732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 02:29:01.289769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.289778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.289785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.289800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-26 02:29:01.289817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.289830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.391115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.391239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.391250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:01.391258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.391289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-26 02:29:01.391310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.391334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 02:29:01.391343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.391350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.391359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 02:29:01.391378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.391392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:01.485740 | orchestrator | skipping: 
[testbed-node-0] 2026-03-26 02:29:01.485833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.485844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-26 02:29:01.485854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 02:29:01.485884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.485905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.485929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.485939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.485947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.485955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.485968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-26 02:29:01.485977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.485989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.605557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:01.605665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.605767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.605810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.605825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-26 02:29:01.605842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:01.605876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.605889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:01.605900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.605920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:01.605938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 02:29:01.605953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-26 02:29:01.605985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:12.155003 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:12.155147 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-26 02:29:12.155167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 02:29:12.155254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 02:29:12.155285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 02:29:12.155297 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:12.155307 | orchestrator | 2026-03-26 02:29:12.155318 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-26 02:29:12.155329 | orchestrator | Thursday 26 March 2026 02:29:01 +0000 (0:00:01.524) 0:03:18.706 ******** 2026-03-26 02:29:12.155340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-26 02:29:12.155352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-26 02:29:12.155364 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:12.155373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}})  2026-03-26 02:29:12.155383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-26 02:29:12.155393 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:12.155419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-26 02:29:12.155430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-26 02:29:12.155448 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:12.155458 | orchestrator | 2026-03-26 02:29:12.155468 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-26 02:29:12.155478 | orchestrator | Thursday 26 March 2026 02:29:03 +0000 (0:00:02.029) 0:03:20.736 ******** 2026-03-26 02:29:12.155488 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:12.155498 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:12.155508 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:12.155518 | orchestrator | 2026-03-26 02:29:12.155528 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-26 02:29:12.155538 | orchestrator | Thursday 26 March 2026 02:29:05 +0000 (0:00:01.377) 0:03:22.113 ******** 2026-03-26 02:29:12.155547 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:12.155557 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:12.155567 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:12.155576 | orchestrator | 2026-03-26 02:29:12.155586 | orchestrator 
| TASK [include_role : placement] ************************************************ 2026-03-26 02:29:12.155596 | orchestrator | Thursday 26 March 2026 02:29:07 +0000 (0:00:02.167) 0:03:24.281 ******** 2026-03-26 02:29:12.155606 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:29:12.155616 | orchestrator | 2026-03-26 02:29:12.155625 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-26 02:29:12.155635 | orchestrator | Thursday 26 March 2026 02:29:08 +0000 (0:00:01.251) 0:03:25.533 ******** 2026-03-26 02:29:12.155646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-26 02:29:12.155664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-26 02:29:12.155675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-26 02:29:12.155692 | orchestrator | 2026-03-26 02:29:12.155708 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-26 02:29:23.184041 | orchestrator | Thursday 26 March 2026 02:29:12 +0000 (0:00:03.611) 0:03:29.144 ******** 2026-03-26 02:29:23.184166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-26 02:29:23.184187 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:23.184259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-26 02:29:23.184272 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:23.184327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-26 02:29:23.184343 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:23.184357 | orchestrator | 2026-03-26 02:29:23.184372 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-26 02:29:23.184385 | orchestrator | Thursday 26 March 2026 02:29:12 +0000 (0:00:00.545) 0:03:29.690 ******** 2026-03-26 02:29:23.184401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-26 02:29:23.184446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-26 02:29:23.184460 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:23.184472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-26 
02:29:23.184485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-26 02:29:23.184498 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:23.184531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-26 02:29:23.184543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-26 02:29:23.184555 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:23.184568 | orchestrator | 2026-03-26 02:29:23.184582 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-26 02:29:23.184596 | orchestrator | Thursday 26 March 2026 02:29:13 +0000 (0:00:00.799) 0:03:30.490 ******** 2026-03-26 02:29:23.184609 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:23.184623 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:23.184637 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:23.184650 | orchestrator | 2026-03-26 02:29:23.184663 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-26 02:29:23.184677 | orchestrator | Thursday 26 March 2026 02:29:15 +0000 (0:00:01.966) 0:03:32.457 ******** 2026-03-26 02:29:23.184689 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:23.184702 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:23.184715 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:23.184728 | orchestrator | 2026-03-26 02:29:23.184741 | orchestrator 
| TASK [include_role : nova] ***************************************************** 2026-03-26 02:29:23.184755 | orchestrator | Thursday 26 March 2026 02:29:17 +0000 (0:00:01.930) 0:03:34.387 ******** 2026-03-26 02:29:23.184769 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:29:23.184783 | orchestrator | 2026-03-26 02:29:23.184797 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-26 02:29:23.184811 | orchestrator | Thursday 26 March 2026 02:29:18 +0000 (0:00:01.583) 0:03:35.971 ******** 2026-03-26 02:29:23.184829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 02:29:23.184865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:29:23.184881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:29:23.184907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 02:29:24.608992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:29:24.609104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:29:24.609159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 02:29:24.609173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:29:24.609184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:29:24.609219 | orchestrator | 2026-03-26 02:29:24.609233 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-26 02:29:24.609244 | orchestrator | Thursday 26 March 2026 02:29:23 +0000 (0:00:04.202) 0:03:40.173 ******** 2026-03-26 02:29:24.609276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 02:29:24.609299 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:29:24.609316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:29:24.609327 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:24.609339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 02:29:24.609357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:29:36.001927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:29:36.002175 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:36.002270 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 02:29:36.002367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 02:29:36.002394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 02:29:36.002413 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:36.002434 | orchestrator | 2026-03-26 02:29:36.002453 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-26 02:29:36.002474 | orchestrator | Thursday 26 March 2026 02:29:24 +0000 (0:00:01.431) 0:03:41.605 ******** 2026-03-26 02:29:36.002497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002616 | orchestrator | skipping: [testbed-node-0] 2026-03-26 
02:29:36.002636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002730 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:36.002750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-26 02:29:36.002842 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:36.002861 | orchestrator | 2026-03-26 02:29:36.002881 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-26 02:29:36.002899 | orchestrator | Thursday 26 March 2026 02:29:25 +0000 (0:00:01.078) 0:03:42.683 ******** 2026-03-26 02:29:36.002916 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:36.002935 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:36.002955 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:36.002974 | orchestrator | 2026-03-26 02:29:36.002992 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-26 02:29:36.003011 | orchestrator | Thursday 26 March 2026 02:29:27 +0000 (0:00:01.507) 0:03:44.190 ******** 2026-03-26 02:29:36.003030 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:36.003048 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:36.003067 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:36.003085 | orchestrator | 2026-03-26 02:29:36.003104 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-26 02:29:36.003123 | orchestrator | Thursday 26 March 2026 02:29:29 +0000 (0:00:02.166) 0:03:46.356 ******** 2026-03-26 02:29:36.003141 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:29:36.003159 | orchestrator | 2026-03-26 02:29:36.003171 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-26 02:29:36.003181 | orchestrator | Thursday 26 March 2026 02:29:31 +0000 (0:00:01.659) 0:03:48.016 ******** 2026-03-26 02:29:36.003192 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-03-26 02:29:36.003297 | orchestrator | 2026-03-26 02:29:36.003310 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-26 02:29:36.003321 | orchestrator | Thursday 26 March 2026 02:29:31 +0000 (0:00:00.881) 0:03:48.897 ******** 2026-03-26 02:29:36.003334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-26 02:29:36.003374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-26 02:29:47.800515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-26 02:29:47.800621 | orchestrator | 
2026-03-26 02:29:47.800633 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-26 02:29:47.800644 | orchestrator | Thursday 26 March 2026 02:29:35 +0000 (0:00:04.092) 0:03:52.990 ******** 2026-03-26 02:29:47.800654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.800663 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:47.800687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.800696 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:47.800704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.800713 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:47.800721 | orchestrator | 2026-03-26 02:29:47.800729 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-26 02:29:47.800739 | orchestrator | Thursday 26 March 2026 02:29:37 +0000 (0:00:01.469) 0:03:54.460 ******** 2026-03-26 02:29:47.800749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 02:29:47.800760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 02:29:47.800790 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:47.800799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 02:29:47.800807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 02:29:47.800816 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:47.800824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 02:29:47.800832 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 02:29:47.800855 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:29:47.800864 | orchestrator | 2026-03-26 02:29:47.800873 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-26 02:29:47.800887 | orchestrator | Thursday 26 March 2026 02:29:38 +0000 (0:00:01.527) 0:03:55.987 ******** 2026-03-26 02:29:47.800900 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:47.800920 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:47.800936 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:47.800949 | orchestrator | 2026-03-26 02:29:47.800962 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-26 02:29:47.800974 | orchestrator | Thursday 26 March 2026 02:29:41 +0000 (0:00:02.483) 0:03:58.470 ******** 2026-03-26 02:29:47.800987 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:29:47.801002 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:29:47.801014 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:29:47.801028 | orchestrator | 2026-03-26 02:29:47.801042 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-26 02:29:47.801056 | orchestrator | Thursday 26 March 2026 02:29:44 +0000 (0:00:02.870) 0:04:01.341 ******** 2026-03-26 02:29:47.801070 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-26 02:29:47.801086 | orchestrator | 2026-03-26 02:29:47.801100 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-26 02:29:47.801114 | orchestrator | 
Thursday 26 March 2026 02:29:45 +0000 (0:00:01.117) 0:04:02.458 ******** 2026-03-26 02:29:47.801139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.801155 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:47.801170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.801197 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:29:47.801234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.801244 | orchestrator | skipping: [testbed-node-2] 2026-03-26 
02:29:47.801253 | orchestrator | 2026-03-26 02:29:47.801262 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-26 02:29:47.801272 | orchestrator | Thursday 26 March 2026 02:29:46 +0000 (0:00:01.039) 0:04:03.498 ******** 2026-03-26 02:29:47.801281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.801291 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:29:47.801300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:29:47.801318 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:12.069687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 02:30:12.069787 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:12.069800 | orchestrator | 2026-03-26 02:30:12.069809 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-26 02:30:12.069818 | orchestrator | Thursday 26 March 2026 02:29:47 +0000 (0:00:01.291) 0:04:04.789 ******** 2026-03-26 02:30:12.069827 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:12.069834 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:12.069842 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:12.069849 | orchestrator | 2026-03-26 02:30:12.069857 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-26 02:30:12.069865 | orchestrator | Thursday 26 March 2026 02:29:49 +0000 (0:00:01.606) 0:04:06.396 ******** 2026-03-26 02:30:12.069872 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:30:12.069881 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:30:12.069888 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:30:12.069896 | orchestrator | 2026-03-26 02:30:12.069903 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-26 02:30:12.069911 | orchestrator | Thursday 26 March 2026 02:29:52 +0000 (0:00:02.774) 0:04:09.171 ******** 2026-03-26 02:30:12.069936 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:30:12.069944 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:30:12.069951 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:30:12.069959 | orchestrator | 2026-03-26 02:30:12.069979 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-26 02:30:12.069986 | orchestrator | Thursday 26 March 2026 02:29:54 +0000 (0:00:02.714) 0:04:11.885 ******** 2026-03-26 02:30:12.069994 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-26 02:30:12.070003 | orchestrator | 2026-03-26 02:30:12.070011 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-26 02:30:12.070054 | orchestrator | Thursday 26 March 2026 02:29:56 +0000 (0:00:01.265) 0:04:13.151 ******** 2026-03-26 02:30:12.070062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 02:30:12.070070 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:12.070078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 02:30:12.070086 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:12.070093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 02:30:12.070101 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:12.070108 | orchestrator | 2026-03-26 02:30:12.070116 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-26 02:30:12.070124 | orchestrator | Thursday 26 March 2026 02:29:57 +0000 (0:00:01.376) 0:04:14.527 ******** 2026-03-26 02:30:12.070146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 02:30:12.070154 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:12.070162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 02:30:12.070176 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:12.070184 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 02:30:12.070191 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:12.070199 | orchestrator | 2026-03-26 02:30:12.070241 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-26 02:30:12.070252 | orchestrator | Thursday 26 March 2026 02:29:58 +0000 (0:00:01.422) 0:04:15.950 ******** 2026-03-26 02:30:12.070261 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:12.070269 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:12.070278 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:12.070286 | orchestrator | 2026-03-26 02:30:12.070294 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-26 02:30:12.070303 | orchestrator | Thursday 26 March 2026 02:30:00 +0000 (0:00:01.896) 0:04:17.847 ******** 2026-03-26 02:30:12.070311 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:30:12.070323 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:30:12.070336 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:30:12.070348 | orchestrator | 2026-03-26 02:30:12.070360 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-26 02:30:12.070372 | orchestrator | Thursday 26 March 2026 02:30:04 +0000 (0:00:03.266) 0:04:21.114 ******** 2026-03-26 02:30:12.070383 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:30:12.070394 | orchestrator | ok: 
[testbed-node-1] 2026-03-26 02:30:12.070405 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:30:12.070416 | orchestrator | 2026-03-26 02:30:12.070428 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-26 02:30:12.070439 | orchestrator | Thursday 26 March 2026 02:30:07 +0000 (0:00:03.279) 0:04:24.394 ******** 2026-03-26 02:30:12.070451 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:30:12.070463 | orchestrator | 2026-03-26 02:30:12.070475 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-26 02:30:12.070487 | orchestrator | Thursday 26 March 2026 02:30:08 +0000 (0:00:01.483) 0:04:25.878 ******** 2026-03-26 02:30:12.070500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:12.070514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 02:30:12.070547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.808664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.808766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:30:12.808781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:12.808792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 02:30:12.808802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.808828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.808853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:30:12.808862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:12.808871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 02:30:12.808879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.808887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.808929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:30:12.808939 | orchestrator | 2026-03-26 02:30:12.808948 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-26 02:30:12.808958 | orchestrator | Thursday 26 March 2026 02:30:12 +0000 (0:00:03.331) 0:04:29.209 ******** 2026-03-26 02:30:12.808978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 02:30:12.954286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 02:30:12.954389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.954405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.954419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:30:12.954455 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:12.954470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 02:30:12.954484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 02:30:12.954527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.954541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.954553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:30:12.954571 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:12.954583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 02:30:12.954612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 02:30:12.954634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 02:30:12.954660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 02:30:24.714322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 02:30:24.714462 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:24.714485 | orchestrator | 2026-03-26 02:30:24.714511 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-26 02:30:24.714532 | orchestrator | Thursday 26 March 2026 02:30:12 +0000 (0:00:00.741) 0:04:29.950 ******** 2026-03-26 02:30:24.714550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 02:30:24.714598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 02:30:24.714617 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:24.714634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 02:30:24.714653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 02:30:24.714671 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:24.714686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 02:30:24.714702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 02:30:24.714718 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:24.714735 | orchestrator | 2026-03-26 02:30:24.714753 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-26 02:30:24.714768 | orchestrator | Thursday 26 March 2026 02:30:13 +0000 (0:00:00.964) 0:04:30.915 ******** 2026-03-26 02:30:24.714785 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:30:24.714795 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:30:24.714807 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:30:24.714818 | orchestrator | 2026-03-26 02:30:24.714829 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-26 02:30:24.714841 | orchestrator | Thursday 26 March 2026 02:30:15 +0000 (0:00:01.802) 0:04:32.717 ******** 2026-03-26 02:30:24.714852 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:30:24.714864 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:30:24.714875 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:30:24.714885 | orchestrator | 2026-03-26 02:30:24.714895 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-26 02:30:24.714905 | orchestrator | Thursday 26 March 2026 02:30:17 +0000 (0:00:02.208) 0:04:34.926 ******** 2026-03-26 02:30:24.714915 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:30:24.714926 | orchestrator | 2026-03-26 02:30:24.714935 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-03-26 02:30:24.714945 | orchestrator | Thursday 26 March 2026 02:30:19 +0000 (0:00:01.424) 0:04:36.351 ******** 2026-03-26 02:30:24.714985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:30:24.715038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:30:24.715071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:30:24.715091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:30:24.715117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:30:24.715149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:30:26.773440 | orchestrator | 2026-03-26 02:30:26.773537 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-26 02:30:26.773549 | orchestrator | Thursday 26 March 2026 02:30:24 +0000 (0:00:05.349) 0:04:41.700 ******** 2026-03-26 02:30:26.773558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:30:26.773569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:30:26.773582 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:26.773605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:30:26.773613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:30:26.773651 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:26.773658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:30:26.773665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:30:26.773672 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:26.773679 | orchestrator | 2026-03-26 02:30:26.773685 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-26 02:30:26.773692 | orchestrator | Thursday 26 March 2026 02:30:25 +0000 (0:00:01.069) 0:04:42.769 ******** 2026-03-26 02:30:26.773700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-26 02:30:26.773708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-26 02:30:26.773718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-26 02:30:26.773732 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:26.773743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-03-26 02:30:26.773749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-26 02:30:26.773756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-26 02:30:26.773762 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:26.773769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-26 02:30:26.773775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-26 02:30:26.773838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-26 02:30:33.010379 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:33.010507 | orchestrator | 2026-03-26 02:30:33.010526 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-26 02:30:33.010540 | orchestrator | Thursday 26 March 2026 02:30:26 +0000 (0:00:00.994) 0:04:43.764 ******** 2026-03-26 02:30:33.010552 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:33.010563 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:33.010575 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 02:30:33.010586 | orchestrator | 2026-03-26 02:30:33.010597 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-26 02:30:33.010609 | orchestrator | Thursday 26 March 2026 02:30:27 +0000 (0:00:00.451) 0:04:44.216 ******** 2026-03-26 02:30:33.010620 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:33.010648 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:33.010660 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:33.010682 | orchestrator | 2026-03-26 02:30:33.010694 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-26 02:30:33.010706 | orchestrator | Thursday 26 March 2026 02:30:28 +0000 (0:00:01.471) 0:04:45.688 ******** 2026-03-26 02:30:33.010717 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:30:33.010729 | orchestrator | 2026-03-26 02:30:33.010740 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-26 02:30:33.010752 | orchestrator | Thursday 26 March 2026 02:30:30 +0000 (0:00:01.846) 0:04:47.534 ******** 2026-03-26 02:30:33.010767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-03-26 02:30:33.010814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 02:30:33.010842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:33.010856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:33.010868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 02:30:33.010902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-26 02:30:33.010915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 02:30:33.010927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:33.010947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:33.010959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 02:30:33.010975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-26 02:30:33.010988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 02:30:33.011008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:34.574496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:34.574617 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 02:30:34.574671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-26 02:30:34.574711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-26 02:30:34.574729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:34.574745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:34.574776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 02:30:34.574786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-26 02:30:34.574805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-26 02:30:34.574819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:34.574829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:34.574838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 02:30:34.574856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-26 02:30:35.302452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-26 02:30:35.302542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.302580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.302594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 02:30:35.302607 | orchestrator | 2026-03-26 02:30:35.302621 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-26 02:30:35.302634 | orchestrator | Thursday 26 March 2026 02:30:34 +0000 (0:00:04.189) 0:04:51.723 ******** 2026-03-26 02:30:35.302648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-26 02:30:35.302662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 02:30:35.302719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.302734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.302752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 02:30:35.302788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-26 02:30:35.302805 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-26 02:30:35.302828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-26 02:30:35.435863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.435950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 02:30:35.435976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.435986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.435996 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 02:30:35.436006 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:35.436017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.436026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 02:30:35.436070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-26 02:30:35.436082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-26 02:30:35.436096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-26 02:30:35.436105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.436114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 02:30:35.436128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:35.436143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:37.082619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 02:30:37.082735 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:37.082760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:37.082796 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 02:30:37.082816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-26 02:30:37.082833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-26 02:30:37.082872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:37.082907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 02:30:37.082917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 02:30:37.082926 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:37.082934 | orchestrator | 2026-03-26 02:30:37.082944 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-26 02:30:37.082953 | orchestrator | Thursday 26 March 2026 02:30:35 +0000 (0:00:00.854) 0:04:52.577 ******** 2026-03-26 02:30:37.082967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-26 02:30:37.082978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-26 02:30:37.082988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-26 02:30:37.082999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-26 02:30:37.083009 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:37.083017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-26 02:30:37.083033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-26 02:30:37.083042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-26 02:30:37.083050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-26 02:30:37.083058 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:37.083066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-26 02:30:37.083074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-26 02:30:37.083082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-26 02:30:37.083096 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-26 02:30:44.951510 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:44.951632 | orchestrator | 2026-03-26 02:30:44.951650 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-26 02:30:44.951664 | orchestrator | Thursday 26 March 2026 02:30:37 +0000 (0:00:01.495) 0:04:54.073 ******** 2026-03-26 02:30:44.951675 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:44.951687 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:44.951698 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:44.951709 | orchestrator | 2026-03-26 02:30:44.951720 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-26 02:30:44.951732 | orchestrator | Thursday 26 March 2026 02:30:37 +0000 (0:00:00.480) 0:04:54.553 ******** 2026-03-26 02:30:44.951743 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:44.951754 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:44.951765 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:44.951776 | orchestrator | 2026-03-26 02:30:44.951788 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-26 02:30:44.951799 | orchestrator | Thursday 26 March 2026 02:30:38 +0000 (0:00:01.393) 0:04:55.946 ******** 2026-03-26 02:30:44.951810 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:30:44.951821 | orchestrator | 2026-03-26 02:30:44.951832 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-26 02:30:44.951843 | 
orchestrator | Thursday 26 March 2026 02:30:40 +0000 (0:00:01.863) 0:04:57.810 ******** 2026-03-26 02:30:44.951858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:30:44.951905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:30:44.951962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:30:44.951976 | orchestrator | 2026-03-26 02:30:44.951988 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-26 02:30:44.952019 | orchestrator | Thursday 26 March 2026 02:30:43 +0000 (0:00:02.204) 0:05:00.014 ******** 2026-03-26 02:30:44.952039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 02:30:44.952062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 02:30:44.952076 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:44.952089 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:44.952102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 02:30:44.952115 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:44.952128 | orchestrator | 2026-03-26 02:30:44.952141 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-26 02:30:44.952155 | orchestrator | Thursday 26 March 2026 02:30:43 +0000 (0:00:00.433) 0:05:00.448 ******** 2026-03-26 02:30:44.952169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-26 02:30:44.952183 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:44.952197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-26 02:30:44.952210 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:44.952222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-26 02:30:44.952298 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:44.952311 | orchestrator | 2026-03-26 02:30:44.952323 | orchestrator | TASK [proxysql-config : Copying over rabbitmq 
ProxySQL users config] *********** 2026-03-26 02:30:44.952334 | orchestrator | Thursday 26 March 2026 02:30:44 +0000 (0:00:01.000) 0:05:01.448 ******** 2026-03-26 02:30:44.952352 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:55.712671 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:55.712821 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:55.712849 | orchestrator | 2026-03-26 02:30:55.712869 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-03-26 02:30:55.712888 | orchestrator | Thursday 26 March 2026 02:30:44 +0000 (0:00:00.502) 0:05:01.951 ******** 2026-03-26 02:30:55.712905 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:30:55.712954 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:30:55.712973 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:30:55.712991 | orchestrator | 2026-03-26 02:30:55.713009 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-03-26 02:30:55.713027 | orchestrator | Thursday 26 March 2026 02:30:46 +0000 (0:00:01.439) 0:05:03.390 ******** 2026-03-26 02:30:55.713045 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:30:55.713064 | orchestrator | 2026-03-26 02:30:55.713081 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-26 02:30:55.713099 | orchestrator | Thursday 26 March 2026 02:30:47 +0000 (0:00:01.611) 0:05:05.002 ******** 2026-03-26 02:30:55.713141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:55.713171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:55.713191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:55.713271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:55.713308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:55.713322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 02:30:55.713335 | orchestrator | 2026-03-26 02:30:55.713349 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-26 02:30:55.713362 | orchestrator | Thursday 26 March 2026 02:30:54 +0000 (0:00:06.936) 0:05:11.938 ******** 2026-03-26 02:30:55.713374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 
'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 02:30:55.713397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 02:31:01.793355 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:01.793487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 02:31:01.793510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 02:31:01.793533 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:01.793552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 02:31:01.793573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 02:31:01.793622 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:01.793639 | orchestrator | 2026-03-26 02:31:01.793651 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 
2026-03-26 02:31:01.793665 | orchestrator | Thursday 26 March 2026 02:30:55 +0000 (0:00:00.770) 0:05:12.709 ******** 2026-03-26 02:31:01.793694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793751 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:01.793763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-26 
02:31:01.793797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793809 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:01.793820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-03-26 02:31:01.793872 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:01.793885 | orchestrator | 2026-03-26 02:31:01.793906 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-03-26 02:31:01.793919 | orchestrator | Thursday 26 March 2026 02:30:56 +0000 (0:00:00.933) 0:05:13.642 ******** 2026-03-26 02:31:01.793933 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:31:01.793946 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:31:01.793959 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:31:01.793972 | orchestrator | 
2026-03-26 02:31:01.793985 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-03-26 02:31:01.793998 | orchestrator | Thursday 26 March 2026 02:30:58 +0000 (0:00:01.400) 0:05:15.043 ******** 2026-03-26 02:31:01.794010 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:31:01.794087 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:31:01.794101 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:31:01.794122 | orchestrator | 2026-03-26 02:31:01.794141 | orchestrator | TASK [include_role : swift] **************************************************** 2026-03-26 02:31:01.794162 | orchestrator | Thursday 26 March 2026 02:31:00 +0000 (0:00:02.290) 0:05:17.333 ******** 2026-03-26 02:31:01.794183 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:01.794202 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:01.794220 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:01.794235 | orchestrator | 2026-03-26 02:31:01.794268 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-03-26 02:31:01.794280 | orchestrator | Thursday 26 March 2026 02:31:00 +0000 (0:00:00.665) 0:05:17.999 ******** 2026-03-26 02:31:01.794291 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:01.794302 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:01.794313 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:01.794324 | orchestrator | 2026-03-26 02:31:01.794336 | orchestrator | TASK [include_role : trove] **************************************************** 2026-03-26 02:31:01.794347 | orchestrator | Thursday 26 March 2026 02:31:01 +0000 (0:00:00.356) 0:05:18.356 ******** 2026-03-26 02:31:01.794358 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:01.794377 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:45.653381 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:45.653486 | orchestrator | 
2026-03-26 02:31:45.653501 | orchestrator | TASK [include_role : venus] **************************************************** 2026-03-26 02:31:45.653512 | orchestrator | Thursday 26 March 2026 02:31:01 +0000 (0:00:00.438) 0:05:18.794 ******** 2026-03-26 02:31:45.653521 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:45.653531 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:45.653545 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:45.653559 | orchestrator | 2026-03-26 02:31:45.653573 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-03-26 02:31:45.653586 | orchestrator | Thursday 26 March 2026 02:31:02 +0000 (0:00:00.337) 0:05:19.132 ******** 2026-03-26 02:31:45.653600 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:45.653613 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:45.653626 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:45.653639 | orchestrator | 2026-03-26 02:31:45.653654 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-03-26 02:31:45.653685 | orchestrator | Thursday 26 March 2026 02:31:02 +0000 (0:00:00.680) 0:05:19.812 ******** 2026-03-26 02:31:45.653701 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:31:45.653716 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:31:45.653729 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:31:45.653743 | orchestrator | 2026-03-26 02:31:45.653752 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-26 02:31:45.653760 | orchestrator | Thursday 26 March 2026 02:31:03 +0000 (0:00:00.602) 0:05:20.414 ******** 2026-03-26 02:31:45.653768 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:31:45.653778 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:31:45.653786 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:31:45.653794 | orchestrator | 2026-03-26 
02:31:45.653802 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-26 02:31:45.653853 | orchestrator | Thursday 26 March 2026 02:31:04 +0000 (0:00:00.716) 0:05:21.130 ********
2026-03-26 02:31:45.653863 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.653871 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.653878 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.653886 | orchestrator |
2026-03-26 02:31:45.653895 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-26 02:31:45.653904 | orchestrator | Thursday 26 March 2026 02:31:04 +0000 (0:00:00.729) 0:05:21.860 ********
2026-03-26 02:31:45.653913 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.653923 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.653932 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.653941 | orchestrator |
2026-03-26 02:31:45.653950 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-26 02:31:45.653959 | orchestrator | Thursday 26 March 2026 02:31:05 +0000 (0:00:00.892) 0:05:22.753 ********
2026-03-26 02:31:45.653968 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.653977 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.653986 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.653995 | orchestrator |
2026-03-26 02:31:45.654004 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-26 02:31:45.654065 | orchestrator | Thursday 26 March 2026 02:31:06 +0000 (0:00:00.969) 0:05:23.722 ********
2026-03-26 02:31:45.654076 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.654085 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.654094 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.654104 | orchestrator |
2026-03-26 02:31:45.654113 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-26 02:31:45.654122 | orchestrator | Thursday 26 March 2026 02:31:07 +0000 (0:00:00.842) 0:05:24.565 ********
2026-03-26 02:31:45.654130 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:31:45.654138 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:31:45.654146 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:31:45.654154 | orchestrator |
2026-03-26 02:31:45.654162 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-26 02:31:45.654170 | orchestrator | Thursday 26 March 2026 02:31:12 +0000 (0:00:05.397) 0:05:29.962 ********
2026-03-26 02:31:45.654178 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.654186 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.654194 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.654202 | orchestrator |
2026-03-26 02:31:45.654210 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-26 02:31:45.654218 | orchestrator | Thursday 26 March 2026 02:31:16 +0000 (0:00:03.222) 0:05:33.185 ********
2026-03-26 02:31:45.654226 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:31:45.654234 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:31:45.654242 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:31:45.654250 | orchestrator |
2026-03-26 02:31:45.654258 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-26 02:31:45.654312 | orchestrator | Thursday 26 March 2026 02:31:27 +0000 (0:00:10.974) 0:05:44.159 ********
2026-03-26 02:31:45.654322 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.654330 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.654337 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.654345 | orchestrator |
2026-03-26 02:31:45.654353 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-26 02:31:45.654361 | orchestrator | Thursday 26 March 2026 02:31:31 +0000 (0:00:04.647) 0:05:48.807 ********
2026-03-26 02:31:45.654369 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:31:45.654377 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:31:45.654387 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:31:45.654401 | orchestrator |
2026-03-26 02:31:45.654415 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-26 02:31:45.654429 | orchestrator | Thursday 26 March 2026 02:31:36 +0000 (0:00:04.277) 0:05:53.085 ********
2026-03-26 02:31:45.654457 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:31:45.654471 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:31:45.654486 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:31:45.654500 | orchestrator |
2026-03-26 02:31:45.654509 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-26 02:31:45.654517 | orchestrator | Thursday 26 March 2026 02:31:36 +0000 (0:00:00.761) 0:05:53.847 ********
2026-03-26 02:31:45.654525 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:31:45.654533 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:31:45.654541 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:31:45.654549 | orchestrator |
2026-03-26 02:31:45.654576 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-26 02:31:45.654585 | orchestrator | Thursday 26 March 2026 02:31:37 +0000 (0:00:00.403) 0:05:54.251 ********
2026-03-26 02:31:45.654593 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:31:45.654601 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:31:45.654608 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:31:45.654616 | orchestrator |
2026-03-26 02:31:45.654624 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-26 02:31:45.654633 | orchestrator | Thursday 26 March 2026 02:31:37 +0000 (0:00:00.388) 0:05:54.639 ********
2026-03-26 02:31:45.654647 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:31:45.654660 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:31:45.654673 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:31:45.654686 | orchestrator |
2026-03-26 02:31:45.654700 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-26 02:31:45.654714 | orchestrator | Thursday 26 March 2026 02:31:38 +0000 (0:00:00.398) 0:05:55.038 ********
2026-03-26 02:31:45.654728 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:31:45.654751 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:31:45.654765 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:31:45.654778 | orchestrator |
2026-03-26 02:31:45.654791 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-26 02:31:45.654806 | orchestrator | Thursday 26 March 2026 02:31:38 +0000 (0:00:00.737) 0:05:55.775 ********
2026-03-26 02:31:45.654818 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:31:45.654831 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:31:45.654843 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:31:45.654856 | orchestrator |
2026-03-26 02:31:45.654870 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-26 02:31:45.654883 | orchestrator | Thursday 26 March 2026 02:31:39 +0000 (0:00:00.373) 0:05:56.148 ********
2026-03-26 02:31:45.654897 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.654910 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.654924 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.654932 | orchestrator |
2026-03-26 02:31:45.654940 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-26 02:31:45.654948 | orchestrator | Thursday 26 March 2026 02:31:43 +0000 (0:00:04.783) 0:06:00.932 ********
2026-03-26 02:31:45.654956 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:45.654964 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:45.654972 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:45.654980 | orchestrator |
2026-03-26 02:31:45.654988 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:31:45.654997 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-26 02:31:45.655007 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-26 02:31:45.655015 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-26 02:31:45.655023 | orchestrator |
2026-03-26 02:31:45.655039 | orchestrator |
2026-03-26 02:31:45.655047 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:31:45.655055 | orchestrator | Thursday 26 March 2026 02:31:44 +0000 (0:00:00.819) 0:06:01.751 ********
2026-03-26 02:31:45.655063 | orchestrator | ===============================================================================
2026-03-26 02:31:45.655070 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.97s
2026-03-26 02:31:45.655078 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.94s
2026-03-26 02:31:45.655086 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.40s
2026-03-26 02:31:45.655094 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.35s
2026-03-26 02:31:45.655102 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.78s
2026-03-26 02:31:45.655110 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.65s
2026-03-26 02:31:45.655118 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.52s
2026-03-26 02:31:45.655126 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.36s
2026-03-26 02:31:45.655134 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.35s
2026-03-26 02:31:45.655141 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.28s
2026-03-26 02:31:45.655149 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.20s
2026-03-26 02:31:45.655157 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.19s
2026-03-26 02:31:45.655165 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s
2026-03-26 02:31:45.655173 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.96s
2026-03-26 02:31:45.655181 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.72s
2026-03-26 02:31:45.655189 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.61s
2026-03-26 02:31:45.655197 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.55s
2026-03-26 02:31:45.655205 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.49s
2026-03-26 02:31:45.655212 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.40s
2026-03-26 02:31:45.655220 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.35s
2026-03-26 02:31:48.131477 | orchestrator | 2026-03-26 02:31:48 | INFO  | Task d9981432-5afc-4da2-8289-2d0601443640 (opensearch) was prepared for execution.
2026-03-26 02:31:48.131582 | orchestrator | 2026-03-26 02:31:48 | INFO  | It takes a moment until task d9981432-5afc-4da2-8289-2d0601443640 (opensearch) has been started and output is visible here.
2026-03-26 02:31:59.297000 | orchestrator |
2026-03-26 02:31:59.297113 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 02:31:59.297129 | orchestrator |
2026-03-26 02:31:59.297140 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 02:31:59.297151 | orchestrator | Thursday 26 March 2026 02:31:52 +0000 (0:00:00.269) 0:00:00.269 ********
2026-03-26 02:31:59.297161 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:31:59.297173 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:31:59.297182 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:31:59.297226 | orchestrator |
2026-03-26 02:31:59.297237 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 02:31:59.297247 | orchestrator | Thursday 26 March 2026 02:31:52 +0000 (0:00:00.298) 0:00:00.568 ********
2026-03-26 02:31:59.297325 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-26 02:31:59.297339 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-26 02:31:59.297349 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-26 02:31:59.297359 | orchestrator |
2026-03-26 02:31:59.297369 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-03-26 02:31:59.297400 | orchestrator |
2026-03-26 02:31:59.297410 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-03-26 02:31:59.297420 | orchestrator | Thursday 26 March 2026 02:31:53 +0000 (0:00:00.447) 0:00:01.015 ********
2026-03-26 02:31:59.297431 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-03-26 02:31:59.297440 | orchestrator | 2026-03-26 02:31:59.297450 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-26 02:31:59.297460 | orchestrator | Thursday 26 March 2026 02:31:53 +0000 (0:00:00.538) 0:00:01.554 ******** 2026-03-26 02:31:59.297470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-26 02:31:59.297479 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-26 02:31:59.297489 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-26 02:31:59.297499 | orchestrator | 2026-03-26 02:31:59.297509 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-26 02:31:59.297521 | orchestrator | Thursday 26 March 2026 02:31:54 +0000 (0:00:00.667) 0:00:02.221 ******** 2026-03-26 02:31:59.297537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:31:59.297552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:31:59.297582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:31:59.297602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:31:59.297624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:31:59.297638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:31:59.297650 | orchestrator | 2026-03-26 02:31:59.297661 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-26 02:31:59.297673 | orchestrator | Thursday 26 March 2026 02:31:56 +0000 (0:00:01.732) 0:00:03.953 ******** 2026-03-26 02:31:59.297684 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:31:59.297695 | orchestrator | 2026-03-26 02:31:59.297706 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-26 02:31:59.297718 | orchestrator | Thursday 26 March 2026 02:31:56 +0000 (0:00:00.571) 0:00:04.525 ******** 2026-03-26 02:31:59.297742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:32:00.116715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:32:00.116815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:32:00.116830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:32:00.116839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:32:00.116897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:32:00.116908 | orchestrator | 2026-03-26 02:32:00.116917 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-26 02:32:00.116926 | orchestrator | Thursday 26 March 2026 02:31:59 +0000 (0:00:02.402) 0:00:06.927 ******** 
2026-03-26 02:32:00.116935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:32:00.116942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-03-26 02:32:00.116950 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:32:00.116958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:32:00.116981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:32:01.212058 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:32:01.212168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:32:01.212182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:32:01.212190 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:32:01.212197 | orchestrator | 2026-03-26 02:32:01.212204 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-26 02:32:01.212213 | orchestrator | Thursday 26 March 2026 02:32:00 +0000 (0:00:00.819) 0:00:07.747 ******** 2026-03-26 02:32:01.212238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:32:01.212258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:32:01.212364 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:32:01.212374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:32:01.212381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:32:01.212388 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:32:01.212400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-26 02:32:01.212434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-26 02:32:01.212441 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:32:01.212448 | orchestrator | 2026-03-26 02:32:01.212454 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-26 02:32:01.212466 | orchestrator | Thursday 26 March 2026 02:32:01 +0000 (0:00:01.091) 0:00:08.838 ******** 2026-03-26 02:32:09.664954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:32:09.665069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:32:09.665094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:32:09.665168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:32:09.665215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:32:09.665239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:32:09.665271 | orchestrator | 2026-03-26 02:32:09.665323 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-26 02:32:09.665337 | orchestrator | Thursday 26 March 2026 02:32:03 +0000 (0:00:02.510) 0:00:11.349 ******** 2026-03-26 02:32:09.665349 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:32:09.665361 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:32:09.665372 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:32:09.665384 | orchestrator | 2026-03-26 02:32:09.665395 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-26 02:32:09.665406 | orchestrator | Thursday 26 March 2026 02:32:06 +0000 (0:00:02.379) 0:00:13.729 ******** 2026-03-26 02:32:09.665417 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:32:09.665428 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:32:09.665439 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:32:09.665450 | 
orchestrator | 2026-03-26 02:32:09.665461 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-26 02:32:09.665472 | orchestrator | Thursday 26 March 2026 02:32:07 +0000 (0:00:01.915) 0:00:15.644 ******** 2026-03-26 02:32:09.665484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:32:09.665504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-03-26 02:32:09.665526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-26 02:34:57.468546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-03-26 02:34:57.468679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:34:57.468719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-26 02:34:57.468736 | orchestrator | 2026-03-26 02:34:57.468751 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-26 02:34:57.468767 | orchestrator | Thursday 26 March 2026 02:32:09 +0000 (0:00:01.657) 0:00:17.302 ******** 2026-03-26 02:34:57.468781 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:34:57.468796 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:34:57.468808 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:34:57.468821 | orchestrator | 2026-03-26 02:34:57.468835 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-26 02:34:57.468850 | orchestrator | Thursday 26 March 2026 02:32:09 +0000 (0:00:00.299) 0:00:17.601 ******** 2026-03-26 02:34:57.468863 | orchestrator | 2026-03-26 02:34:57.468876 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-26 02:34:57.468889 | orchestrator | Thursday 26 March 2026 02:32:10 +0000 (0:00:00.080) 0:00:17.682 ******** 2026-03-26 02:34:57.468903 | orchestrator | 2026-03-26 02:34:57.468917 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-26 02:34:57.468942 | orchestrator | Thursday 26 March 2026 02:32:10 +0000 (0:00:00.067) 0:00:17.750 ******** 2026-03-26 02:34:57.468956 | orchestrator | 2026-03-26 02:34:57.468969 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-26 02:34:57.469004 | orchestrator | Thursday 26 March 2026 02:32:10 +0000 (0:00:00.090) 0:00:17.840 ******** 2026-03-26 02:34:57.469019 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:34:57.469033 | orchestrator | 
2026-03-26 02:34:57.469047 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-26 02:34:57.469062 | orchestrator | Thursday 26 March 2026 02:32:10 +0000 (0:00:00.226) 0:00:18.066 ******** 2026-03-26 02:34:57.469076 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:34:57.469091 | orchestrator | 2026-03-26 02:34:57.469106 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-26 02:34:57.469120 | orchestrator | Thursday 26 March 2026 02:32:11 +0000 (0:00:00.683) 0:00:18.750 ******** 2026-03-26 02:34:57.469134 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:34:57.469149 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:34:57.469164 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:34:57.469178 | orchestrator | 2026-03-26 02:34:57.469192 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-26 02:34:57.469208 | orchestrator | Thursday 26 March 2026 02:33:22 +0000 (0:01:10.951) 0:01:29.701 ******** 2026-03-26 02:34:57.469223 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:34:57.469237 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:34:57.469251 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:34:57.469265 | orchestrator | 2026-03-26 02:34:57.469279 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-26 02:34:57.469293 | orchestrator | Thursday 26 March 2026 02:34:46 +0000 (0:01:24.336) 0:02:54.038 ******** 2026-03-26 02:34:57.469309 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:34:57.469324 | orchestrator | 2026-03-26 02:34:57.469339 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-26 02:34:57.469354 | orchestrator | Thursday 26 March 2026 02:34:46 +0000 
(0:00:00.522) 0:02:54.561 ******** 2026-03-26 02:34:57.469393 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:34:57.469410 | orchestrator | 2026-03-26 02:34:57.469425 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-26 02:34:57.469439 | orchestrator | Thursday 26 March 2026 02:34:49 +0000 (0:00:02.979) 0:02:57.540 ******** 2026-03-26 02:34:57.469453 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:34:57.469467 | orchestrator | 2026-03-26 02:34:57.469481 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-26 02:34:57.469495 | orchestrator | Thursday 26 March 2026 02:34:52 +0000 (0:00:02.245) 0:02:59.786 ******** 2026-03-26 02:34:57.469509 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:34:57.469523 | orchestrator | 2026-03-26 02:34:57.469537 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-26 02:34:57.469551 | orchestrator | Thursday 26 March 2026 02:34:54 +0000 (0:00:02.731) 0:03:02.518 ******** 2026-03-26 02:34:57.469564 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:34:57.469577 | orchestrator | 2026-03-26 02:34:57.469591 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:34:57.469607 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-26 02:34:57.469622 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 02:34:57.469648 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 02:34:57.469663 | orchestrator | 2026-03-26 02:34:57.469676 | orchestrator | 2026-03-26 02:34:57.469704 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:34:57.469719 | orchestrator | Thursday 
26 March 2026 02:34:57 +0000 (0:00:02.568) 0:03:05.086 ******** 2026-03-26 02:34:57.469733 | orchestrator | =============================================================================== 2026-03-26 02:34:57.469746 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.34s 2026-03-26 02:34:57.469761 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.95s 2026-03-26 02:34:57.469775 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.98s 2026-03-26 02:34:57.469788 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.73s 2026-03-26 02:34:57.469802 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2026-03-26 02:34:57.469815 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.51s 2026-03-26 02:34:57.469829 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.40s 2026-03-26 02:34:57.469842 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.38s 2026-03-26 02:34:57.469857 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.25s 2026-03-26 02:34:57.469872 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.92s 2026-03-26 02:34:57.469885 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.73s 2026-03-26 02:34:57.469899 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.66s 2026-03-26 02:34:57.469913 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s 2026-03-26 02:34:57.469927 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.82s 2026-03-26 02:34:57.469941 | orchestrator | opensearch : Perform 
a flush -------------------------------------------- 0.68s 2026-03-26 02:34:57.469954 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s 2026-03-26 02:34:57.469980 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-03-26 02:34:57.827625 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2026-03-26 02:34:57.827700 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-03-26 02:34:57.827708 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2026-03-26 02:35:00.253405 | orchestrator | 2026-03-26 02:35:00 | INFO  | Task e3d112e1-7037-4de8-b783-8f3076c4424e (memcached) was prepared for execution. 2026-03-26 02:35:00.253474 | orchestrator | 2026-03-26 02:35:00 | INFO  | It takes a moment until task e3d112e1-7037-4de8-b783-8f3076c4424e (memcached) has been started and output is visible here. 
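The opensearch play above ends with a "Check if a log retention policy exists" / "Create new log retention policy" pair, which in OpenSearch terms means talking to the Index State Management (ISM) plugin. As a rough illustration (not the role's actual implementation), the policy body such a task would PUT might look like the sketch below; the policy name, index pattern, and 14-day retention are hypothetical values, not taken from this job's configuration, and only the internal VIP/port mirror the healthcheck URLs seen in the log.

```python
import json

# Hypothetical sketch of an ISM retention policy like the one the
# "Create new log retention policy" task creates. POLICY_NAME, the
# "flog-*" pattern and the "14d" age are assumed example values.
OPENSEARCH_URL = "http://192.168.16.10:9200"  # internal endpoint seen in the healthchecks
POLICY_NAME = "retention"                      # hypothetical policy id


def build_retention_policy(index_pattern: str, min_age: str) -> dict:
    """Return an ISM policy that deletes matching indices older than min_age."""
    return {
        "policy": {
            "description": f"Delete {index_pattern} indices after {min_age}",
            "default_state": "hot",
            "states": [
                {
                    "name": "hot",
                    "actions": [],
                    # Transition to the delete state once the index is old enough.
                    "transitions": [
                        {
                            "state_name": "delete",
                            "conditions": {"min_index_age": min_age},
                        }
                    ],
                },
                {
                    "name": "delete",
                    "actions": [{"delete": {}}],
                    "transitions": [],
                },
            ],
            # Auto-attach the policy to newly created matching indices.
            "ism_template": [{"index_patterns": [index_pattern], "priority": 1}],
        }
    }


policy = build_retention_policy("flog-*", "14d")
# The ISM policy API endpoint the PUT would target:
url = f"{OPENSEARCH_URL}/_plugins/_ism/policies/{POLICY_NAME}"
body = json.dumps(policy)
```

The "Apply retention policy to existing indices" task then corresponds to a second call against already-existing indices (`POST _plugins/_ism/add/<pattern>`), since the `ism_template` only covers indices created after the policy exists.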
2026-03-26 02:35:12.586654 | orchestrator | 2026-03-26 02:35:12.586730 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 02:35:12.586737 | orchestrator | 2026-03-26 02:35:12.586742 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 02:35:12.586747 | orchestrator | Thursday 26 March 2026 02:35:04 +0000 (0:00:00.281) 0:00:00.281 ******** 2026-03-26 02:35:12.586751 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:35:12.586757 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:35:12.586761 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:35:12.586765 | orchestrator | 2026-03-26 02:35:12.586769 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 02:35:12.586773 | orchestrator | Thursday 26 March 2026 02:35:04 +0000 (0:00:00.362) 0:00:00.644 ******** 2026-03-26 02:35:12.586777 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-26 02:35:12.586781 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-26 02:35:12.586785 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-26 02:35:12.586789 | orchestrator | 2026-03-26 02:35:12.586793 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-26 02:35:12.586815 | orchestrator | 2026-03-26 02:35:12.586819 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-26 02:35:12.586823 | orchestrator | Thursday 26 March 2026 02:35:05 +0000 (0:00:00.469) 0:00:01.113 ******** 2026-03-26 02:35:12.586827 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:35:12.586832 | orchestrator | 2026-03-26 02:35:12.586836 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-03-26 02:35:12.586839 | orchestrator | Thursday 26 March 2026 02:35:05 +0000 (0:00:00.498) 0:00:01.611 ******** 2026-03-26 02:35:12.586843 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-26 02:35:12.586847 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-26 02:35:12.586851 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-26 02:35:12.586855 | orchestrator | 2026-03-26 02:35:12.586858 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-26 02:35:12.586862 | orchestrator | Thursday 26 March 2026 02:35:06 +0000 (0:00:00.681) 0:00:02.293 ******** 2026-03-26 02:35:12.586866 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-26 02:35:12.586870 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-26 02:35:12.586873 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-26 02:35:12.586877 | orchestrator | 2026-03-26 02:35:12.586881 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-03-26 02:35:12.586885 | orchestrator | Thursday 26 March 2026 02:35:08 +0000 (0:00:01.774) 0:00:04.067 ******** 2026-03-26 02:35:12.586898 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:35:12.586902 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:35:12.586906 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:35:12.586910 | orchestrator | 2026-03-26 02:35:12.586913 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-26 02:35:12.586917 | orchestrator | Thursday 26 March 2026 02:35:09 +0000 (0:00:01.577) 0:00:05.645 ******** 2026-03-26 02:35:12.586921 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:35:12.586925 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:35:12.586928 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:35:12.586932 | orchestrator | 2026-03-26 
02:35:12.586936 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:35:12.586940 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:35:12.586944 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:35:12.586948 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:35:12.586952 | orchestrator | 2026-03-26 02:35:12.586956 | orchestrator | 2026-03-26 02:35:12.586959 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:35:12.586963 | orchestrator | Thursday 26 March 2026 02:35:12 +0000 (0:00:02.122) 0:00:07.767 ******** 2026-03-26 02:35:12.586967 | orchestrator | =============================================================================== 2026-03-26 02:35:12.586971 | orchestrator | memcached : Restart memcached container --------------------------------- 2.12s 2026-03-26 02:35:12.586975 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.77s 2026-03-26 02:35:12.586979 | orchestrator | memcached : Check memcached container ----------------------------------- 1.58s 2026-03-26 02:35:12.586982 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s 2026-03-26 02:35:12.586986 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.50s 2026-03-26 02:35:12.586990 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2026-03-26 02:35:12.586994 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-03-26 02:35:15.102887 | orchestrator | 2026-03-26 02:35:15 | INFO  | Task 3a877614-1e0e-4232-a1a6-b271c8628b0b (redis) was prepared for execution. 
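The memcached container check above relies on kolla's `healthcheck_listen memcached 11211` probe, which essentially verifies that the process accepts TCP connections on its port. A minimal stand-in for that check, useful when debugging a node by hand, is sketched below; the default port 11211 is memcached's conventional port, an assumption rather than a value read from this job's config.

```python
import socket


def is_listening(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Rough equivalent of kolla's healthcheck_listen probe: it only proves the
    socket accepts connections, not that the service answers protocol requests.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timeout, unreachable host, etc.
        return False


# Example: probe memcached on a node (11211 is the conventional port).
# is_listening("192.168.16.10", 11211)
```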
2026-03-26 02:35:15.103020 | orchestrator | 2026-03-26 02:35:15 | INFO  | It takes a moment until task 3a877614-1e0e-4232-a1a6-b271c8628b0b (redis) has been started and output is visible here. 2026-03-26 02:35:24.287595 | orchestrator | 2026-03-26 02:35:24.287697 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 02:35:24.287709 | orchestrator | 2026-03-26 02:35:24.287716 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 02:35:24.287724 | orchestrator | Thursday 26 March 2026 02:35:19 +0000 (0:00:00.275) 0:00:00.275 ******** 2026-03-26 02:35:24.287730 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:35:24.287738 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:35:24.287744 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:35:24.287751 | orchestrator | 2026-03-26 02:35:24.287757 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 02:35:24.287764 | orchestrator | Thursday 26 March 2026 02:35:19 +0000 (0:00:00.333) 0:00:00.609 ******** 2026-03-26 02:35:24.287771 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-26 02:35:24.287778 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-26 02:35:24.287785 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-26 02:35:24.287791 | orchestrator | 2026-03-26 02:35:24.287797 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-26 02:35:24.287803 | orchestrator | 2026-03-26 02:35:24.287810 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-26 02:35:24.287816 | orchestrator | Thursday 26 March 2026 02:35:20 +0000 (0:00:00.433) 0:00:01.042 ******** 2026-03-26 02:35:24.287822 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-03-26 02:35:24.287830 | orchestrator | 2026-03-26 02:35:24.287837 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-26 02:35:24.287843 | orchestrator | Thursday 26 March 2026 02:35:20 +0000 (0:00:00.491) 0:00:01.534 ******** 2026-03-26 02:35:24.287854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.287866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.287874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.287905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.287933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.287940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.287946 | orchestrator | 2026-03-26 02:35:24.287953 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-26 02:35:24.287959 | orchestrator | Thursday 26 March 2026 02:35:21 +0000 (0:00:01.076) 0:00:02.610 ******** 2026-03-26 02:35:24.287965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.288013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.288021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.288034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:24.288048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.519908 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520019 | orchestrator | 2026-03-26 02:35:28.520037 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-26 02:35:28.520051 | orchestrator | Thursday 26 March 2026 02:35:24 +0000 (0:00:02.523) 0:00:05.134 ******** 2026-03-26 02:35:28.520065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520201 | orchestrator | 2026-03-26 02:35:28.520213 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-26 02:35:28.520224 | orchestrator | Thursday 26 March 2026 02:35:26 +0000 (0:00:02.558) 0:00:07.692 ******** 2026-03-26 02:35:28.520236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:28.520325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 02:35:44.903974 | orchestrator | 2026-03-26 02:35:44.904074 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-26 02:35:44.904090 | orchestrator | Thursday 26 March 2026 02:35:28 +0000 (0:00:01.467) 0:00:09.159 ******** 2026-03-26 02:35:44.904102 | orchestrator | 2026-03-26 02:35:44.904113 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-26 02:35:44.904124 | orchestrator | Thursday 26 March 2026 02:35:28 +0000 (0:00:00.068) 0:00:09.227 ******** 2026-03-26 02:35:44.904135 | orchestrator | 2026-03-26 02:35:44.904146 | orchestrator | TASK [redis : Flush handlers] 
************************************************** 2026-03-26 02:35:44.904158 | orchestrator | Thursday 26 March 2026 02:35:28 +0000 (0:00:00.069) 0:00:09.297 ******** 2026-03-26 02:35:44.904169 | orchestrator | 2026-03-26 02:35:44.904180 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-26 02:35:44.904191 | orchestrator | Thursday 26 March 2026 02:35:28 +0000 (0:00:00.067) 0:00:09.364 ******** 2026-03-26 02:35:44.904203 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:35:44.904215 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:35:44.904226 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:35:44.904236 | orchestrator | 2026-03-26 02:35:44.904247 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-26 02:35:44.904258 | orchestrator | Thursday 26 March 2026 02:35:36 +0000 (0:00:07.569) 0:00:16.934 ******** 2026-03-26 02:35:44.904298 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:35:44.904310 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:35:44.904321 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:35:44.904332 | orchestrator | 2026-03-26 02:35:44.904343 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:35:44.904354 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:35:44.904367 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:35:44.904426 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 02:35:44.904440 | orchestrator | 2026-03-26 02:35:44.904451 | orchestrator | 2026-03-26 02:35:44.904463 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:35:44.904482 | orchestrator | Thursday 26 March 
2026 02:35:44 +0000 (0:00:08.438) 0:00:25.373 ******** 2026-03-26 02:35:44.904500 | orchestrator | =============================================================================== 2026-03-26 02:35:44.904519 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.44s 2026-03-26 02:35:44.904536 | orchestrator | redis : Restart redis container ----------------------------------------- 7.57s 2026-03-26 02:35:44.904554 | orchestrator | redis : Copying over redis config files --------------------------------- 2.56s 2026-03-26 02:35:44.904572 | orchestrator | redis : Copying over default config.json files -------------------------- 2.52s 2026-03-26 02:35:44.904590 | orchestrator | redis : Check redis containers ------------------------------------------ 1.47s 2026-03-26 02:35:44.904609 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.08s 2026-03-26 02:35:44.904628 | orchestrator | redis : include_tasks --------------------------------------------------- 0.49s 2026-03-26 02:35:44.904650 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2026-03-26 02:35:44.904669 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2026-03-26 02:35:44.904687 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s 2026-03-26 02:35:47.359717 | orchestrator | 2026-03-26 02:35:47 | INFO  | Task 33359332-80a6-46d1-87b5-7308d2e3af07 (mariadb) was prepared for execution. 2026-03-26 02:35:47.359804 | orchestrator | 2026-03-26 02:35:47 | INFO  | It takes a moment until task 33359332-80a6-46d1-87b5-7308d2e3af07 (mariadb) has been started and output is visible here. 
2026-03-26 02:36:01.811620 | orchestrator | 2026-03-26 02:36:01.811725 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 02:36:01.811740 | orchestrator | 2026-03-26 02:36:01.811751 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 02:36:01.811761 | orchestrator | Thursday 26 March 2026 02:35:51 +0000 (0:00:00.167) 0:00:00.167 ******** 2026-03-26 02:36:01.811771 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:36:01.811782 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:36:01.811791 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:36:01.811801 | orchestrator | 2026-03-26 02:36:01.811811 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 02:36:01.811822 | orchestrator | Thursday 26 March 2026 02:35:52 +0000 (0:00:00.346) 0:00:00.514 ******** 2026-03-26 02:36:01.811833 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-26 02:36:01.811843 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-26 02:36:01.811852 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-26 02:36:01.811862 | orchestrator | 2026-03-26 02:36:01.811872 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-26 02:36:01.811882 | orchestrator | 2026-03-26 02:36:01.811892 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-26 02:36:01.811924 | orchestrator | Thursday 26 March 2026 02:35:52 +0000 (0:00:00.577) 0:00:01.091 ******** 2026-03-26 02:36:01.811935 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 02:36:01.811945 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-26 02:36:01.811954 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-26 02:36:01.811964 | orchestrator | 
2026-03-26 02:36:01.811974 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-26 02:36:01.811983 | orchestrator | Thursday 26 March 2026 02:35:53 +0000 (0:00:00.376) 0:00:01.468 ******** 2026-03-26 02:36:01.811994 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:36:01.812004 | orchestrator | 2026-03-26 02:36:01.812014 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-26 02:36:01.812024 | orchestrator | Thursday 26 March 2026 02:35:53 +0000 (0:00:00.568) 0:00:02.036 ******** 2026-03-26 02:36:01.812054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:01.812086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:01.812113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:01.812125 | orchestrator | 2026-03-26 02:36:01.812135 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-26 02:36:01.812145 | orchestrator | Thursday 26 March 2026 02:35:56 +0000 (0:00:02.874) 0:00:04.911 ******** 2026-03-26 02:36:01.812155 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:36:01.812169 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:36:01.812180 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:36:01.812191 | orchestrator | 2026-03-26 02:36:01.812203 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-26 02:36:01.812214 | orchestrator | Thursday 26 March 2026 02:35:57 +0000 (0:00:00.647) 0:00:05.559 ******** 2026-03-26 02:36:01.812225 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:36:01.812236 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:36:01.812248 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:36:01.812260 | orchestrator | 2026-03-26 02:36:01.812272 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-26 02:36:01.812283 | orchestrator | Thursday 26 March 2026 02:35:58 +0000 (0:00:01.395) 0:00:06.954 ******** 2026-03-26 02:36:01.812304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:09.573658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:09.573741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:09.573765 | orchestrator | 2026-03-26 02:36:09.573772 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-26 02:36:09.573779 | orchestrator | Thursday 26 March 2026 02:36:01 +0000 (0:00:03.262) 0:00:10.217 ******** 2026-03-26 02:36:09.573784 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:36:09.573790 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:36:09.573795 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:36:09.573799 | orchestrator | 2026-03-26 02:36:09.573804 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-26 02:36:09.573820 | orchestrator | Thursday 26 March 2026 02:36:02 +0000 (0:00:01.090) 0:00:11.307 ******** 2026-03-26 02:36:09.573825 | 
orchestrator | changed: [testbed-node-0] 2026-03-26 02:36:09.573830 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:36:09.573834 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:36:09.573839 | orchestrator | 2026-03-26 02:36:09.573844 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-26 02:36:09.573848 | orchestrator | Thursday 26 March 2026 02:36:06 +0000 (0:00:03.848) 0:00:15.156 ******** 2026-03-26 02:36:09.573854 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:36:09.573858 | orchestrator | 2026-03-26 02:36:09.573863 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-26 02:36:09.573868 | orchestrator | Thursday 26 March 2026 02:36:07 +0000 (0:00:00.552) 0:00:15.708 ******** 2026-03-26 02:36:09.573876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:09.573886 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:36:09.573896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:14.422497 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:36:14.422653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:14.422717 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:36:14.422740 | orchestrator | 2026-03-26 02:36:14.422758 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-26 02:36:14.422776 | orchestrator | Thursday 26 March 2026 02:36:09 +0000 (0:00:02.272) 0:00:17.980 ******** 2026-03-26 02:36:14.422796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:14.422815 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:36:14.422868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:14.422901 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:36:14.422919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:14.422937 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:36:14.422952 | orchestrator | 2026-03-26 02:36:14.422969 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-26 02:36:14.423139 | orchestrator | Thursday 26 March 2026 02:36:12 +0000 (0:00:02.584) 0:00:20.565 ******** 2026-03-26 02:36:14.423194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:17.106951 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:36:17.107062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:17.107084 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:36:17.107114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 02:36:17.107153 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:36:17.107166 | orchestrator | 2026-03-26 02:36:17.107178 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-26 02:36:17.107191 | orchestrator | Thursday 26 March 2026 02:36:14 +0000 (0:00:02.268) 0:00:22.833 ******** 2026-03-26 02:36:17.107221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:17.107236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:36:17.107264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 02:38:32.993792 | orchestrator | 2026-03-26 02:38:32.993932 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-26 02:38:32.993953 | orchestrator | Thursday 26 March 2026 02:36:17 +0000 (0:00:02.683) 0:00:25.517 ******** 2026-03-26 02:38:32.993965 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:32.993978 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:38:32.993989 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:38:32.994000 | orchestrator | 2026-03-26 02:38:32.994011 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-26 02:38:32.994090 | orchestrator | Thursday 26 March 2026 02:36:17 +0000 (0:00:00.831) 0:00:26.348 ******** 2026-03-26 02:38:32.994102 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.994115 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:38:32.994125 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:38:32.994137 | orchestrator | 2026-03-26 02:38:32.994148 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-03-26 02:38:32.994159 | orchestrator | Thursday 26 March 2026 02:36:18 +0000 (0:00:00.560) 0:00:26.908 ******** 2026-03-26 02:38:32.994170 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.994191 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:38:32.994219 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:38:32.994240 | orchestrator | 2026-03-26 02:38:32.994252 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-26 02:38:32.994264 | orchestrator | Thursday 26 March 2026 02:36:18 +0000 (0:00:00.356) 0:00:27.265 ******** 2026-03-26 02:38:32.994276 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-26 02:38:32.994289 | orchestrator | ...ignoring 2026-03-26 02:38:32.994307 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-26 02:38:32.994324 | orchestrator | ...ignoring 2026-03-26 02:38:32.994338 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-26 02:38:32.994350 | orchestrator | ...ignoring 2026-03-26 02:38:32.994389 | orchestrator | 2026-03-26 02:38:32.994402 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-26 02:38:32.994414 | orchestrator | Thursday 26 March 2026 02:36:29 +0000 (0:00:10.949) 0:00:38.215 ******** 2026-03-26 02:38:32.994427 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.994440 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:38:32.994452 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:38:32.994465 | orchestrator | 2026-03-26 02:38:32.994498 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-26 02:38:32.994509 | orchestrator | Thursday 26 March 2026 02:36:30 +0000 (0:00:00.426) 0:00:38.641 ******** 2026-03-26 02:38:32.994520 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:38:32.994531 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:32.994542 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:32.994553 | orchestrator | 2026-03-26 02:38:32.994563 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-26 02:38:32.994574 | orchestrator | Thursday 26 March 2026 02:36:30 +0000 (0:00:00.685) 0:00:39.326 ******** 2026-03-26 02:38:32.994585 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:38:32.994596 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:32.994607 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:32.994617 | orchestrator | 2026-03-26 02:38:32.994643 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-26 02:38:32.994655 | orchestrator | Thursday 26 March 2026 02:36:31 +0000 (0:00:00.441) 0:00:39.767 ******** 2026-03-26 02:38:32.994667 | orchestrator | skipping: 
[testbed-node-0] 2026-03-26 02:38:32.994677 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:32.994688 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:32.994699 | orchestrator | 2026-03-26 02:38:32.994710 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-26 02:38:32.994722 | orchestrator | Thursday 26 March 2026 02:36:31 +0000 (0:00:00.424) 0:00:40.192 ******** 2026-03-26 02:38:32.994740 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.994760 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:38:32.994778 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:38:32.994796 | orchestrator | 2026-03-26 02:38:32.994815 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-26 02:38:32.994836 | orchestrator | Thursday 26 March 2026 02:36:32 +0000 (0:00:00.431) 0:00:40.623 ******** 2026-03-26 02:38:32.994854 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:38:32.994872 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:32.994890 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:32.994910 | orchestrator | 2026-03-26 02:38:32.994928 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-26 02:38:32.994945 | orchestrator | Thursday 26 March 2026 02:36:33 +0000 (0:00:00.915) 0:00:41.539 ******** 2026-03-26 02:38:32.994964 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:32.994982 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:32.995003 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-26 02:38:32.995017 | orchestrator | 2026-03-26 02:38:32.995028 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-26 02:38:32.995045 | orchestrator | Thursday 26 March 2026 02:36:33 +0000 (0:00:00.400) 0:00:41.939 ******** 2026-03-26 
02:38:32.995065 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:32.995083 | orchestrator | 2026-03-26 02:38:32.995094 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-26 02:38:32.995105 | orchestrator | Thursday 26 March 2026 02:36:43 +0000 (0:00:10.184) 0:00:52.124 ******** 2026-03-26 02:38:32.995116 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.995127 | orchestrator | 2026-03-26 02:38:32.995138 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-26 02:38:32.995150 | orchestrator | Thursday 26 March 2026 02:36:43 +0000 (0:00:00.133) 0:00:52.257 ******** 2026-03-26 02:38:32.995161 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:38:32.995210 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:32.995222 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:32.995233 | orchestrator | 2026-03-26 02:38:32.995245 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-26 02:38:32.995256 | orchestrator | Thursday 26 March 2026 02:36:44 +0000 (0:00:01.086) 0:00:53.344 ******** 2026-03-26 02:38:32.995266 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:32.995277 | orchestrator | 2026-03-26 02:38:32.995288 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-26 02:38:32.995299 | orchestrator | Thursday 26 March 2026 02:36:52 +0000 (0:00:07.778) 0:01:01.123 ******** 2026-03-26 02:38:32.995310 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.995321 | orchestrator | 2026-03-26 02:38:32.995332 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-03-26 02:38:32.995342 | orchestrator | Thursday 26 March 2026 02:36:54 +0000 (0:00:01.580) 0:01:02.703 ******** 2026-03-26 02:38:32.995353 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.995364 | 
orchestrator | 2026-03-26 02:38:32.995375 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-26 02:38:32.995386 | orchestrator | Thursday 26 March 2026 02:36:56 +0000 (0:00:02.667) 0:01:05.370 ******** 2026-03-26 02:38:32.995397 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:32.995408 | orchestrator | 2026-03-26 02:38:32.995419 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-26 02:38:32.995430 | orchestrator | Thursday 26 March 2026 02:36:57 +0000 (0:00:00.144) 0:01:05.515 ******** 2026-03-26 02:38:32.995441 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:38:32.995452 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:32.995463 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:32.995494 | orchestrator | 2026-03-26 02:38:32.995505 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-26 02:38:32.995516 | orchestrator | Thursday 26 March 2026 02:36:57 +0000 (0:00:00.327) 0:01:05.843 ******** 2026-03-26 02:38:32.995527 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:38:32.995538 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-26 02:38:32.995549 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:38:32.995560 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:38:32.995571 | orchestrator | 2026-03-26 02:38:32.995581 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-26 02:38:32.995592 | orchestrator | skipping: no hosts matched 2026-03-26 02:38:32.995603 | orchestrator | 2026-03-26 02:38:32.995614 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-26 02:38:32.995625 | orchestrator | 2026-03-26 02:38:32.995636 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-26 02:38:32.995647 | orchestrator | Thursday 26 March 2026 02:36:57 +0000 (0:00:00.573) 0:01:06.416 ******** 2026-03-26 02:38:32.995658 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:38:32.995669 | orchestrator | 2026-03-26 02:38:32.995680 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-26 02:38:32.995691 | orchestrator | Thursday 26 March 2026 02:37:16 +0000 (0:00:18.300) 0:01:24.717 ******** 2026-03-26 02:38:32.995701 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:38:32.995712 | orchestrator | 2026-03-26 02:38:32.995723 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-26 02:38:32.995734 | orchestrator | Thursday 26 March 2026 02:37:32 +0000 (0:00:16.579) 0:01:41.296 ******** 2026-03-26 02:38:32.995745 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:38:32.995756 | orchestrator | 2026-03-26 02:38:32.995772 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-26 02:38:32.995783 | orchestrator | 2026-03-26 02:38:32.995801 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-26 02:38:32.995812 | orchestrator | Thursday 26 March 2026 02:37:35 +0000 (0:00:02.459) 0:01:43.756 ******** 2026-03-26 02:38:32.995831 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:38:32.995842 | orchestrator | 2026-03-26 02:38:32.995853 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-26 02:38:32.995864 | orchestrator | Thursday 26 March 2026 02:37:53 +0000 (0:00:18.549) 0:02:02.306 ******** 2026-03-26 02:38:32.995875 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:38:32.995886 | orchestrator | 2026-03-26 02:38:32.995897 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-26 02:38:32.995908 
| orchestrator | Thursday 26 March 2026 02:38:10 +0000 (0:00:16.528) 0:02:18.834 ******** 2026-03-26 02:38:32.995919 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:38:32.995930 | orchestrator | 2026-03-26 02:38:32.995941 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-26 02:38:32.995951 | orchestrator | 2026-03-26 02:38:32.995962 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-26 02:38:32.995973 | orchestrator | Thursday 26 March 2026 02:38:12 +0000 (0:00:02.554) 0:02:21.389 ******** 2026-03-26 02:38:32.995984 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:32.995995 | orchestrator | 2026-03-26 02:38:32.996006 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-26 02:38:32.996017 | orchestrator | Thursday 26 March 2026 02:38:25 +0000 (0:00:12.060) 0:02:33.449 ******** 2026-03-26 02:38:32.996028 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.996039 | orchestrator | 2026-03-26 02:38:32.996050 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-26 02:38:32.996061 | orchestrator | Thursday 26 March 2026 02:38:29 +0000 (0:00:04.563) 0:02:38.012 ******** 2026-03-26 02:38:32.996072 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:32.996083 | orchestrator | 2026-03-26 02:38:32.996094 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-26 02:38:32.996104 | orchestrator | 2026-03-26 02:38:32.996115 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-26 02:38:32.996126 | orchestrator | Thursday 26 March 2026 02:38:32 +0000 (0:00:02.795) 0:02:40.808 ******** 2026-03-26 02:38:32.996137 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:38:32.996148 | orchestrator | 
2026-03-26 02:38:32.996159 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-26 02:38:32.996177 | orchestrator | Thursday 26 March 2026 02:38:32 +0000 (0:00:00.583) 0:02:41.392 ******** 2026-03-26 02:38:44.680446 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:44.680579 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:44.680590 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:44.680598 | orchestrator | 2026-03-26 02:38:44.680606 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-26 02:38:44.680615 | orchestrator | Thursday 26 March 2026 02:38:35 +0000 (0:00:02.113) 0:02:43.505 ******** 2026-03-26 02:38:44.680622 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:44.680629 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:44.680636 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:44.680643 | orchestrator | 2026-03-26 02:38:44.680650 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-26 02:38:44.680657 | orchestrator | Thursday 26 March 2026 02:38:36 +0000 (0:00:01.906) 0:02:45.411 ******** 2026-03-26 02:38:44.680665 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:44.680672 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:44.680679 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:44.680686 | orchestrator | 2026-03-26 02:38:44.680693 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-26 02:38:44.680700 | orchestrator | Thursday 26 March 2026 02:38:39 +0000 (0:00:02.185) 0:02:47.596 ******** 2026-03-26 02:38:44.680708 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:44.680715 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:44.680722 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:38:44.680729 | orchestrator | 
2026-03-26 02:38:44.680760 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-26 02:38:44.680769 | orchestrator | Thursday 26 March 2026 02:38:41 +0000 (0:00:01.850) 0:02:49.447 ******** 2026-03-26 02:38:44.680776 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:38:44.680784 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:38:44.680791 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:38:44.680798 | orchestrator | 2026-03-26 02:38:44.680805 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-26 02:38:44.680812 | orchestrator | Thursday 26 March 2026 02:38:43 +0000 (0:00:02.833) 0:02:52.281 ******** 2026-03-26 02:38:44.680819 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:38:44.680826 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:38:44.680833 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:38:44.680840 | orchestrator | 2026-03-26 02:38:44.680847 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:38:44.680855 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-26 02:38:44.680865 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-26 02:38:44.680872 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-26 02:38:44.680879 | orchestrator | 2026-03-26 02:38:44.680886 | orchestrator | 2026-03-26 02:38:44.680893 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:38:44.680900 | orchestrator | Thursday 26 March 2026 02:38:44 +0000 (0:00:00.435) 0:02:52.716 ******** 2026-03-26 02:38:44.680907 | orchestrator | =============================================================================== 2026-03-26 02:38:44.680926 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.85s 2026-03-26 02:38:44.680935 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.11s 2026-03-26 02:38:44.680941 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.06s 2026-03-26 02:38:44.680948 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.95s 2026-03-26 02:38:44.680955 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.18s 2026-03-26 02:38:44.680962 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.78s 2026-03-26 02:38:44.680969 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.01s 2026-03-26 02:38:44.680977 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.56s 2026-03-26 02:38:44.680983 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.85s 2026-03-26 02:38:44.680990 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.26s 2026-03-26 02:38:44.680997 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.87s 2026-03-26 02:38:44.681004 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.83s 2026-03-26 02:38:44.681011 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.80s 2026-03-26 02:38:44.681018 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.68s 2026-03-26 02:38:44.681025 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.67s 2026-03-26 02:38:44.681032 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.58s 2026-03-26 02:38:44.681040 | 
orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.27s 2026-03-26 02:38:44.681047 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.27s 2026-03-26 02:38:44.681053 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.19s 2026-03-26 02:38:44.681060 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.11s 2026-03-26 02:38:47.116762 | orchestrator | 2026-03-26 02:38:47 | INFO  | Task b639b807-1d34-43fe-916c-01e997ccc45d (rabbitmq) was prepared for execution. 2026-03-26 02:38:47.116850 | orchestrator | 2026-03-26 02:38:47 | INFO  | It takes a moment until task b639b807-1d34-43fe-916c-01e997ccc45d (rabbitmq) has been started and output is visible here. 2026-03-26 02:39:00.598995 | orchestrator | 2026-03-26 02:39:00.599087 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 02:39:00.599096 | orchestrator | 2026-03-26 02:39:00.599100 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 02:39:00.599105 | orchestrator | Thursday 26 March 2026 02:38:51 +0000 (0:00:00.193) 0:00:00.193 ******** 2026-03-26 02:39:00.599109 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:39:00.599115 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:39:00.599119 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:39:00.599123 | orchestrator | 2026-03-26 02:39:00.599127 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 02:39:00.599131 | orchestrator | Thursday 26 March 2026 02:38:51 +0000 (0:00:00.330) 0:00:00.524 ******** 2026-03-26 02:39:00.599135 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-26 02:39:00.599139 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-26 02:39:00.599143 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-26 02:39:00.599147 | orchestrator | 2026-03-26 02:39:00.599151 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-26 02:39:00.599155 | orchestrator | 2026-03-26 02:39:00.599159 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-26 02:39:00.599163 | orchestrator | Thursday 26 March 2026 02:38:52 +0000 (0:00:00.572) 0:00:01.096 ******** 2026-03-26 02:39:00.599167 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:39:00.599172 | orchestrator | 2026-03-26 02:39:00.599176 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-26 02:39:00.599179 | orchestrator | Thursday 26 March 2026 02:38:52 +0000 (0:00:00.550) 0:00:01.646 ******** 2026-03-26 02:39:00.599183 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:39:00.599187 | orchestrator | 2026-03-26 02:39:00.599191 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-26 02:39:00.599195 | orchestrator | Thursday 26 March 2026 02:38:53 +0000 (0:00:00.918) 0:00:02.565 ******** 2026-03-26 02:39:00.599199 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:39:00.599203 | orchestrator | 2026-03-26 02:39:00.599207 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-26 02:39:00.599211 | orchestrator | Thursday 26 March 2026 02:38:54 +0000 (0:00:00.371) 0:00:02.936 ******** 2026-03-26 02:39:00.599215 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:39:00.599219 | orchestrator | 2026-03-26 02:39:00.599223 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-26 02:39:00.599227 | orchestrator | Thursday 26 March 2026 02:38:54 +0000 (0:00:00.433) 0:00:03.370 ******** 
2026-03-26 02:39:00.599230 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:39:00.599234 | orchestrator | 2026-03-26 02:39:00.599238 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-26 02:39:00.599242 | orchestrator | Thursday 26 March 2026 02:38:55 +0000 (0:00:00.416) 0:00:03.787 ******** 2026-03-26 02:39:00.599246 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:39:00.599250 | orchestrator | 2026-03-26 02:39:00.599254 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-26 02:39:00.599257 | orchestrator | Thursday 26 March 2026 02:38:55 +0000 (0:00:00.609) 0:00:04.396 ******** 2026-03-26 02:39:00.599274 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:39:00.599293 | orchestrator | 2026-03-26 02:39:00.599297 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-26 02:39:00.599301 | orchestrator | Thursday 26 March 2026 02:38:56 +0000 (0:00:00.959) 0:00:05.356 ******** 2026-03-26 02:39:00.599305 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:39:00.599309 | orchestrator | 2026-03-26 02:39:00.599313 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-26 02:39:00.599317 | orchestrator | Thursday 26 March 2026 02:38:57 +0000 (0:00:00.781) 0:00:06.137 ******** 2026-03-26 02:39:00.599320 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:39:00.599324 | orchestrator | 2026-03-26 02:39:00.599328 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-26 02:39:00.599332 | orchestrator | Thursday 26 March 2026 02:38:57 +0000 (0:00:00.365) 0:00:06.502 ******** 2026-03-26 02:39:00.599335 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:39:00.599339 | orchestrator | 2026-03-26 
02:39:00.599343 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-26 02:39:00.599347 | orchestrator | Thursday 26 March 2026 02:38:58 +0000 (0:00:00.371) 0:00:06.874 ******** 2026-03-26 02:39:00.599366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:00.599373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:00.599378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:00.599386 | orchestrator | 2026-03-26 02:39:00.599393 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-26 02:39:00.599397 | orchestrator | Thursday 26 March 2026 02:38:59 +0000 (0:00:00.826) 0:00:07.700 ******** 2026-03-26 02:39:00.599401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:00.599410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:19.446749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:19.446836 | orchestrator | 2026-03-26 02:39:19.446845 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-26 02:39:19.446852 | orchestrator | Thursday 26 March 2026 02:39:00 +0000 (0:00:01.567) 0:00:09.267 ******** 2026-03-26 02:39:19.446876 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-26 02:39:19.446883 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-26 02:39:19.446888 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-26 02:39:19.446893 | orchestrator | 2026-03-26 02:39:19.446898 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-03-26 02:39:19.446904 | orchestrator | Thursday 26 March 2026 02:39:01 +0000 (0:00:01.319) 0:00:10.586 ******** 2026-03-26 02:39:19.446921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-26 02:39:19.446927 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-26 02:39:19.446932 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-26 02:39:19.446937 | orchestrator | 2026-03-26 02:39:19.446942 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-26 02:39:19.446948 | orchestrator | Thursday 26 March 2026 02:39:03 +0000 (0:00:01.714) 0:00:12.301 ******** 2026-03-26 02:39:19.446953 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-26 02:39:19.446958 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-26 02:39:19.446963 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-26 02:39:19.446968 | orchestrator | 2026-03-26 02:39:19.446974 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-26 02:39:19.446979 | orchestrator | Thursday 26 March 2026 02:39:05 +0000 (0:00:01.410) 0:00:13.711 ******** 2026-03-26 02:39:19.446984 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-26 02:39:19.446989 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-26 02:39:19.446994 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-26 02:39:19.446999 | orchestrator | 2026-03-26 02:39:19.447005 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-03-26 02:39:19.447010 | orchestrator | Thursday 26 March 2026 02:39:06 +0000 (0:00:01.755) 0:00:15.467 ******** 2026-03-26 02:39:19.447015 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-26 02:39:19.447020 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-26 02:39:19.447025 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-26 02:39:19.447030 | orchestrator | 2026-03-26 02:39:19.447035 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-26 02:39:19.447041 | orchestrator | Thursday 26 March 2026 02:39:08 +0000 (0:00:01.430) 0:00:16.898 ******** 2026-03-26 02:39:19.447046 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-26 02:39:19.447051 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-26 02:39:19.447057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-26 02:39:19.447062 | orchestrator | 2026-03-26 02:39:19.447067 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-26 02:39:19.447072 | orchestrator | Thursday 26 March 2026 02:39:09 +0000 (0:00:01.454) 0:00:18.352 ******** 2026-03-26 02:39:19.447077 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:39:19.447084 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:39:19.447101 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:39:19.447111 | orchestrator | 2026-03-26 02:39:19.447116 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-26 02:39:19.447121 | orchestrator | 
Thursday 26 March 2026 02:39:10 +0000 (0:00:00.405) 0:00:18.757 ******** 2026-03-26 02:39:19.447128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:19.447138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:19.447144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 02:39:19.447150 | orchestrator | 2026-03-26 02:39:19.447155 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-26 02:39:19.447160 | orchestrator | Thursday 26 March 2026 02:39:11 +0000 (0:00:01.184) 0:00:19.942 ******** 2026-03-26 02:39:19.447166 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:39:19.447171 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:39:19.447176 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:39:19.447181 | orchestrator | 2026-03-26 02:39:19.447187 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-26 02:39:19.447197 | orchestrator | Thursday 26 March 2026 02:39:12 +0000 (0:00:00.884) 0:00:20.826 ********
2026-03-26 02:39:19.447202 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:39:19.447208 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:39:19.447213 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:39:19.447218 | orchestrator |
2026-03-26 02:39:19.447223 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-26 02:39:19.447232 | orchestrator | Thursday 26 March 2026 02:39:19 +0000 (0:00:07.279) 0:00:28.106 ********
2026-03-26 02:40:50.151886 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:40:50.152002 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:40:50.152017 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:40:50.152030 | orchestrator |
2026-03-26 02:40:50.152042 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-26 02:40:50.152055 | orchestrator |
2026-03-26 02:40:50.152066 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-26 02:40:50.152077 | orchestrator | Thursday 26 March 2026 02:39:20 +0000 (0:00:00.594) 0:00:28.701 ********
2026-03-26 02:40:50.152089 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:40:50.152101 | orchestrator |
2026-03-26 02:40:50.152113 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-26 02:40:50.152127 | orchestrator | Thursday 26 March 2026 02:39:20 +0000 (0:00:00.245) 0:00:29.315 ********
2026-03-26 02:40:50.152148 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:40:50.152166 | orchestrator |
2026-03-26 02:40:50.152185 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-26 02:40:50.152203 | orchestrator | Thursday 26 March 2026 02:39:20 +0000 (0:00:00.245) 0:00:29.560 ********
2026-03-26 02:40:50.152221 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:40:50.152240 | orchestrator |
2026-03-26 02:40:50.152257 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-26 02:40:50.152319 | orchestrator | Thursday 26 March 2026 02:39:27 +0000 (0:00:06.599) 0:00:36.160 ********
2026-03-26 02:40:50.152341 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:40:50.152362 | orchestrator |
2026-03-26 02:40:50.152380 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-26 02:40:50.152399 | orchestrator |
2026-03-26 02:40:50.152411 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-26 02:40:50.152425 | orchestrator | Thursday 26 March 2026 02:40:14 +0000 (0:00:46.534) 0:01:22.695 ********
2026-03-26 02:40:50.152437 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:40:50.152449 | orchestrator |
2026-03-26 02:40:50.152462 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-26 02:40:50.152475 | orchestrator | Thursday 26 March 2026 02:40:14 +0000 (0:00:00.586) 0:01:23.282 ********
2026-03-26 02:40:50.152488 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:40:50.152501 | orchestrator |
2026-03-26 02:40:50.152513 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-26 02:40:50.152526 | orchestrator | Thursday 26 March 2026 02:40:14 +0000 (0:00:00.254) 0:01:23.536 ********
2026-03-26 02:40:50.152538 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:40:50.152551 | orchestrator |
2026-03-26 02:40:50.152568 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-26 02:40:50.152634 | orchestrator | Thursday 26 March 2026 02:40:16 +0000 (0:00:01.691) 0:01:25.228 ********
2026-03-26 02:40:50.152656 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:40:50.152673 | orchestrator |
2026-03-26 02:40:50.152691 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-26 02:40:50.152709 | orchestrator |
2026-03-26 02:40:50.152726 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-26 02:40:50.152744 | orchestrator | Thursday 26 March 2026 02:40:30 +0000 (0:00:13.585) 0:01:38.813 ********
2026-03-26 02:40:50.152762 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:40:50.152780 | orchestrator |
2026-03-26 02:40:50.152823 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-26 02:40:50.152839 | orchestrator | Thursday 26 March 2026 02:40:30 +0000 (0:00:00.747) 0:01:39.561 ********
2026-03-26 02:40:50.152855 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:40:50.152870 | orchestrator |
2026-03-26 02:40:50.152886 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-26 02:40:50.152902 | orchestrator | Thursday 26 March 2026 02:40:31 +0000 (0:00:00.235) 0:01:39.796 ********
2026-03-26 02:40:50.152917 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:40:50.152934 | orchestrator |
2026-03-26 02:40:50.152951 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-26 02:40:50.152968 | orchestrator | Thursday 26 March 2026 02:40:32 +0000 (0:00:01.470) 0:01:41.266 ********
2026-03-26 02:40:50.152984 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:40:50.153000 | orchestrator |
2026-03-26 02:40:50.153018 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-26 02:40:50.153038 | orchestrator |
2026-03-26 02:40:50.153058 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-26 02:40:50.153076 | orchestrator | Thursday 26 March 2026 02:40:46 +0000 (0:00:14.155) 0:01:55.422 ********
2026-03-26 02:40:50.153092 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:40:50.153103 | orchestrator |
2026-03-26 02:40:50.153114 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-26 02:40:50.153125 | orchestrator | Thursday 26 March 2026 02:40:47 +0000 (0:00:00.608) 0:01:56.030 ********
2026-03-26 02:40:50.153136 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-26 02:40:50.153147 | orchestrator | enable_outward_rabbitmq_True
2026-03-26 02:40:50.153158 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-26 02:40:50.153168 | orchestrator | outward_rabbitmq_restart
2026-03-26 02:40:50.153179 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:40:50.153190 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:40:50.153201 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:40:50.153212 | orchestrator |
2026-03-26 02:40:50.153223 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-26 02:40:50.153234 | orchestrator | skipping: no hosts matched
2026-03-26 02:40:50.153245 | orchestrator |
2026-03-26 02:40:50.153256 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-26 02:40:50.153273 | orchestrator | skipping: no hosts matched
2026-03-26 02:40:50.153291 | orchestrator |
2026-03-26 02:40:50.153309 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-26 02:40:50.153327 | orchestrator | skipping: no hosts matched
2026-03-26 02:40:50.153344 | orchestrator |
2026-03-26 02:40:50.153363 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:40:50.153411 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-26 02:40:50.153433 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:40:50.153452 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:40:50.153472 | orchestrator |
2026-03-26 02:40:50.153490 | orchestrator |
2026-03-26 02:40:50.153506 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:40:50.153517 | orchestrator | Thursday 26 March 2026 02:40:49 +0000 (0:00:02.403) 0:01:58.434 ********
2026-03-26 02:40:50.153528 | orchestrator | ===============================================================================
2026-03-26 02:40:50.153539 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 74.28s
2026-03-26 02:40:50.153550 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.76s
2026-03-26 02:40:50.153574 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.28s
2026-03-26 02:40:50.153585 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.40s
2026-03-26 02:40:50.153596 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s
2026-03-26 02:40:50.153607 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.76s
2026-03-26 02:40:50.153648 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.71s
2026-03-26 02:40:50.153659 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.57s
2026-03-26 02:40:50.153670 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.45s
2026-03-26 02:40:50.153681 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.43s
2026-03-26 02:40:50.153692 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.41s
2026-03-26 02:40:50.153703 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.32s
2026-03-26 02:40:50.153714 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.18s
2026-03-26 02:40:50.153725 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.96s
2026-03-26 02:40:50.153753 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s
2026-03-26 02:40:50.153765 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.88s
2026-03-26 02:40:50.153776 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.83s
2026-03-26 02:40:50.153787 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.78s
2026-03-26 02:40:50.153798 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.74s
2026-03-26 02:40:50.153809 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.61s
2026-03-26 02:40:52.719153 | orchestrator | 2026-03-26 02:40:52 | INFO  | Task 9d84b916-885e-4b98-a059-63c4a30b9927 (openvswitch) was prepared for execution.
2026-03-26 02:40:52.719229 | orchestrator | 2026-03-26 02:40:52 | INFO  | It takes a moment until task 9d84b916-885e-4b98-a059-63c4a30b9927 (openvswitch) has been started and output is visible here.
2026-03-26 02:41:05.809678 | orchestrator |
2026-03-26 02:41:05.809766 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 02:41:05.809773 | orchestrator |
2026-03-26 02:41:05.809778 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 02:41:05.809783 | orchestrator | Thursday 26 March 2026 02:40:57 +0000 (0:00:00.268) 0:00:00.268 ********
2026-03-26 02:41:05.809787 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:41:05.809792 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:41:05.809796 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:41:05.809800 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:41:05.809804 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:41:05.809808 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:41:05.809811 | orchestrator |
2026-03-26 02:41:05.809815 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 02:41:05.809819 | orchestrator | Thursday 26 March 2026 02:40:57 +0000 (0:00:00.714) 0:00:00.983 ********
2026-03-26 02:41:05.809823 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 02:41:05.809828 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 02:41:05.809831 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 02:41:05.809835 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 02:41:05.809839 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 02:41:05.809843 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 02:41:05.809847 | orchestrator |
2026-03-26 02:41:05.809867 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-26 02:41:05.809871 | orchestrator |
2026-03-26 02:41:05.809876 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-26 02:41:05.809880 | orchestrator | Thursday 26 March 2026 02:40:58 +0000 (0:00:00.638) 0:00:01.621 ********
2026-03-26 02:41:05.809884 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:41:05.809890 | orchestrator |
2026-03-26 02:41:05.809893 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-26 02:41:05.809897 | orchestrator | Thursday 26 March 2026 02:40:59 +0000 (0:00:01.168) 0:00:02.790 ********
2026-03-26 02:41:05.809901 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-26 02:41:05.809906 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-26 02:41:05.809909 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-26 02:41:05.809913 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-26 02:41:05.809917 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-26 02:41:05.809921 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-26 02:41:05.809924 | orchestrator |
2026-03-26 02:41:05.809928 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-26 02:41:05.809932 | orchestrator | Thursday 26 March 2026 02:41:00 +0000 (0:00:01.466) 0:00:04.010 ********
2026-03-26 02:41:05.809936 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-03-26 02:41:05.809940 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-03-26 02:41:05.809943 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-03-26 02:41:05.809947 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-03-26 02:41:05.809951 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-03-26 02:41:05.809955 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-03-26 02:41:05.809959 | orchestrator |
2026-03-26 02:41:05.809962 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-26 02:41:05.809966 | orchestrator | Thursday 26 March 2026 02:41:02 +0000 (0:00:01.466) 0:00:05.477 ********
2026-03-26 02:41:05.809970 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-26 02:41:05.809973 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:41:05.809979 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-26 02:41:05.809982 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:41:05.809986 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-26 02:41:05.809990 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:41:05.809994 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-26 02:41:05.809997 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:41:05.810001 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-26 02:41:05.810005 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:41:05.810008 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-26 02:41:05.810042 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:41:05.810047 | orchestrator |
2026-03-26 02:41:05.810051 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-26 02:41:05.810055 | orchestrator | Thursday 26 March 2026 02:41:03 +0000 (0:00:01.217) 0:00:06.694 ********
2026-03-26 02:41:05.810059 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:41:05.810063 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:41:05.810067 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:41:05.810071 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:41:05.810075 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:41:05.810079 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:41:05.810082 | orchestrator | 2026-03-26 02:41:05.810086 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-26 02:41:05.810094 | orchestrator | Thursday 26 March 2026 02:41:04 +0000 (0:00:00.761) 0:00:07.456 ******** 2026-03-26 02:41:05.810111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:05.810121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:05.810125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:05.810156 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:05.810163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:05.810172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015184 | orchestrator | 2026-03-26 02:41:08.015193 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-26 02:41:08.015216 | orchestrator | Thursday 26 March 2026 02:41:05 +0000 (0:00:01.473) 0:00:08.929 ******** 2026-03-26 02:41:08.015223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015262 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:08.015275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609283 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609332 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609362 | orchestrator | 2026-03-26 02:41:10.609371 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-26 02:41:10.609380 | orchestrator | Thursday 26 March 2026 02:41:08 +0000 (0:00:02.203) 0:00:11.133 ******** 2026-03-26 02:41:10.609387 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:41:10.609395 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:41:10.609402 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:41:10.609409 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:41:10.609416 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:41:10.609423 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:41:10.609430 | orchestrator | 2026-03-26 02:41:10.609438 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-26 02:41:10.609445 | orchestrator | Thursday 26 March 2026 02:41:09 +0000 (0:00:00.982) 0:00:12.115 ******** 2026-03-26 02:41:10.609452 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:10.609499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:36.279123 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-26 02:41:36.279206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:36.279219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 
02:41:36.279252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:36.279260 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:36.279278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:36.279285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-26 02:41:36.279292 | orchestrator | 2026-03-26 02:41:36.279299 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 02:41:36.279308 | orchestrator | Thursday 26 March 2026 02:41:10 +0000 (0:00:01.614) 0:00:13.729 ******** 2026-03-26 02:41:36.279313 | orchestrator | 2026-03-26 02:41:36.279320 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 02:41:36.279325 | orchestrator | Thursday 26 March 2026 02:41:11 +0000 (0:00:00.330) 0:00:14.060 ******** 2026-03-26 02:41:36.279338 | orchestrator | 2026-03-26 02:41:36.279344 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 02:41:36.279350 | orchestrator | Thursday 26 March 2026 02:41:11 +0000 (0:00:00.136) 0:00:14.196 ******** 2026-03-26 02:41:36.279356 | orchestrator | 2026-03-26 02:41:36.279361 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
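The container definitions dumped above each carry a kolla-style healthcheck dict (`interval`, `retries`, `start_period`, `timeout` as second-granularity strings, plus a `['CMD-SHELL', ...]` test). As a hedged sketch, not the actual kolla-ansible code, the helper below shows one plausible translation of such a dict into `docker run` health flags; the function name and flag mapping are illustrative assumptions:

```python
# Sketch: translate a kolla-style healthcheck dict (as dumped in the log
# above) into `docker run` health flags. The mapping is illustrative;
# kolla-ansible applies these values through its own container module.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    # 'test' is ['CMD-SHELL', '<command>']; the remaining keys are
    # plain-second values stored as strings in the log dump.
    return [
        f"--health-cmd={hc['test'][1]}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Healthcheck dict copied from the openvswitch-db-server entry above.
db_healthcheck = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "ovsdb-client list-dbs"], "timeout": "30",
}
print(healthcheck_to_docker_flags(db_healthcheck))
```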
2026-03-26 02:41:36.279367 | orchestrator | Thursday 26 March 2026 02:41:11 +0000 (0:00:00.144) 0:00:14.340 ********
2026-03-26 02:41:36.279373 | orchestrator |
2026-03-26 02:41:36.279379 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-26 02:41:36.279385 | orchestrator | Thursday 26 March 2026 02:41:11 +0000 (0:00:00.153) 0:00:14.494 ********
2026-03-26 02:41:36.279390 | orchestrator |
2026-03-26 02:41:36.279396 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-03-26 02:41:36.279402 | orchestrator | Thursday 26 March 2026 02:41:11 +0000 (0:00:00.148) 0:00:14.642 ********
2026-03-26 02:41:36.279408 | orchestrator |
2026-03-26 02:41:36.279414 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-03-26 02:41:36.279419 | orchestrator | Thursday 26 March 2026 02:41:11 +0000 (0:00:00.133) 0:00:14.776 ********
2026-03-26 02:41:36.279426 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:41:36.279433 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:41:36.279438 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:41:36.279444 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:41:36.279450 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:41:36.279456 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:41:36.279462 | orchestrator |
2026-03-26 02:41:36.279468 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-03-26 02:41:36.279474 | orchestrator | Thursday 26 March 2026 02:41:20 +0000 (0:00:08.407) 0:00:23.183 ********
2026-03-26 02:41:36.279485 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:41:36.279492 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:41:36.279498 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:41:36.279505 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:41:36.279511 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:41:36.279518 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:41:36.279524 | orchestrator |
2026-03-26 02:41:36.279530 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-26 02:41:36.279537 | orchestrator | Thursday 26 March 2026 02:41:21 +0000 (0:00:01.784) 0:00:24.968 ********
2026-03-26 02:41:36.279543 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:41:36.279550 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:41:36.279557 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:41:36.279563 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:41:36.279569 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:41:36.279575 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:41:36.279582 | orchestrator |
2026-03-26 02:41:36.279588 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-26 02:41:36.279595 | orchestrator | Thursday 26 March 2026 02:41:29 +0000 (0:00:08.005) 0:00:32.973 ********
2026-03-26 02:41:36.279602 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-26 02:41:36.279609 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-26 02:41:36.279615 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-26 02:41:36.279621 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-26 02:41:36.279627 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-26 02:41:36.279634 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-26 02:41:36.279641 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-26 02:41:36.279661 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-26 02:41:49.252065 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-26 02:41:49.252130 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-26 02:41:49.252137 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-26 02:41:49.252142 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-26 02:41:49.252147 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 02:41:49.252151 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 02:41:49.252155 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 02:41:49.252159 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 02:41:49.252163 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 02:41:49.252166 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 02:41:49.252170 | orchestrator |
2026-03-26 02:41:49.252175 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
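The "Set system-id, hostname and hw-offload" items above describe column/key/value triples against the Open_vSwitch table, with `state: 'absent'` meaning the key is removed. A hedged sketch of the `ovs-vsctl` invocations such items imply (the helper itself is hypothetical, not the kolla-ansible module; the `set`/`remove` command shapes follow standard ovs-vsctl syntax):

```python
# Sketch: derive the ovs-vsctl command implied by one task item from the
# log above. Hypothetical helper; kolla-ansible drives this via its own
# module rather than building command strings like this.

def ovs_vsctl_command(item: dict) -> str:
    table, record = "Open_vSwitch", "."  # the global OVS record
    if item.get("state") == "absent":
        # e.g. drop other_config:hw-offload entirely
        return f"ovs-vsctl remove {table} {record} {item['col']} {item['name']}"
    return (f"ovs-vsctl set {table} {record} "
            f"{item['col']}:{item['name']}={item['value']}")

print(ovs_vsctl_command({'col': 'external_ids', 'name': 'system-id',
                         'value': 'testbed-node-1'}))
print(ovs_vsctl_command({'col': 'other_config', 'name': 'hw-offload',
                         'value': True, 'state': 'absent'}))
```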
2026-03-26 02:41:49.252181 | orchestrator | Thursday 26 March 2026 02:41:36 +0000 (0:00:06.330) 0:00:39.304 ********
2026-03-26 02:41:49.252186 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-26 02:41:49.252190 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:41:49.252195 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-26 02:41:49.252199 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:41:49.252203 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-26 02:41:49.252207 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:41:49.252211 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-03-26 02:41:49.252215 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-03-26 02:41:49.252219 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-03-26 02:41:49.252223 | orchestrator |
2026-03-26 02:41:49.252227 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-26 02:41:49.252230 | orchestrator | Thursday 26 March 2026 02:41:38 +0000 (0:00:02.424) 0:00:41.728 ********
2026-03-26 02:41:49.252234 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-26 02:41:49.252238 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:41:49.252242 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-26 02:41:49.252246 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:41:49.252249 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-26 02:41:49.252253 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:41:49.252257 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-26 02:41:49.252261 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-26 02:41:49.252271 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-26 02:41:49.252275 | orchestrator |
2026-03-26 02:41:49.252279 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-26 02:41:49.252283 | orchestrator | Thursday 26 March 2026 02:41:41 +0000 (0:00:03.021) 0:00:44.750 ********
2026-03-26 02:41:49.252287 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:41:49.252291 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:41:49.252309 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:41:49.252313 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:41:49.252317 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:41:49.252320 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:41:49.252324 | orchestrator |
2026-03-26 02:41:49.252328 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:41:49.252333 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-26 02:41:49.252338 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-26 02:41:49.252342 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-26 02:41:49.252346 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 02:41:49.252350 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 02:41:49.252353 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 02:41:49.252357 | orchestrator |
2026-03-26 02:41:49.252361 | orchestrator |
2026-03-26 02:41:49.252365 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:41:49.252368 | orchestrator | Thursday 26 March 2026 02:41:48 +0000 (0:00:07.104) 0:00:51.855 ********
2026-03-26 02:41:49.252382 | orchestrator | ===============================================================================
2026-03-26 02:41:49.252386 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.11s
2026-03-26 02:41:49.252390 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.41s
2026-03-26 02:41:49.252394 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.33s
2026-03-26 02:41:49.252397 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.02s
2026-03-26 02:41:49.252401 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.42s
2026-03-26 02:41:49.252405 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.20s
2026-03-26 02:41:49.252409 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.78s
2026-03-26 02:41:49.252412 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.61s
2026-03-26 02:41:49.252416 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.47s
2026-03-26 02:41:49.252420 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.47s
2026-03-26 02:41:49.252424 | orchestrator | module-load : Load modules ---------------------------------------------- 1.22s
2026-03-26 02:41:49.252428 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.22s
2026-03-26 02:41:49.252432 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.17s
2026-03-26 02:41:49.252435 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.05s
2026-03-26 02:41:49.252439 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.98s
2026-03-26 02:41:49.252443 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.76s
2026-03-26 02:41:49.252447 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s
2026-03-26 02:41:49.252450 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2026-03-26 02:41:51.864197 | orchestrator | 2026-03-26 02:41:51 | INFO  | Task 13a853e3-ac1d-41a3-984d-fc24ff8bacb1 (ovn) was prepared for execution.
2026-03-26 02:41:51.864267 | orchestrator | 2026-03-26 02:41:51 | INFO  | It takes a moment until task 13a853e3-ac1d-41a3-984d-fc24ff8bacb1 (ovn) has been started and output is visible here.
2026-03-26 02:42:02.963641 | orchestrator |
2026-03-26 02:42:02.963854 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 02:42:02.963887 | orchestrator |
2026-03-26 02:42:02.963907 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 02:42:02.963925 | orchestrator | Thursday 26 March 2026 02:41:56 +0000 (0:00:00.203) 0:00:00.203 ********
2026-03-26 02:42:02.963943 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:42:02.963962 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:42:02.963980 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:42:02.963998 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:42:02.964016 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:42:02.964034 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:42:02.964051 | orchestrator |
2026-03-26 02:42:02.964069 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 02:42:02.964088 | orchestrator | Thursday 26 March 2026 02:41:57 +0000 (0:00:00.774) 0:00:00.978 ********
2026-03-26 02:42:02.964127 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-26 02:42:02.964146 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-26
02:42:02.964166 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-26 02:42:02.964186 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-26 02:42:02.964207 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-26 02:42:02.964227 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-26 02:42:02.964247 | orchestrator | 2026-03-26 02:42:02.964268 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-26 02:42:02.964290 | orchestrator | 2026-03-26 02:42:02.964308 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-26 02:42:02.964327 | orchestrator | Thursday 26 March 2026 02:41:57 +0000 (0:00:00.867) 0:00:01.845 ******** 2026-03-26 02:42:02.964346 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:42:02.964366 | orchestrator | 2026-03-26 02:42:02.964384 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-26 02:42:02.964403 | orchestrator | Thursday 26 March 2026 02:41:59 +0000 (0:00:01.142) 0:00:02.988 ******** 2026-03-26 02:42:02.964424 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964594 | orchestrator | 2026-03-26 02:42:02.964611 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-26 02:42:02.964628 | orchestrator | Thursday 26 March 2026 02:42:00 +0000 (0:00:01.188) 0:00:04.177 ******** 2026-03-26 02:42:02.964653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964703 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964837 | orchestrator | 2026-03-26 02:42:02.964855 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-26 02:42:02.964874 | orchestrator | Thursday 26 March 2026 02:42:01 +0000 (0:00:01.471) 0:00:05.648 ******** 2026-03-26 02:42:02.964893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964911 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:02.964945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213713 | orchestrator | 2026-03-26 02:42:27.213718 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-26 02:42:27.213725 | orchestrator | Thursday 26 March 2026 02:42:02 +0000 (0:00:01.271) 0:00:06.920 ******** 2026-03-26 02:42:27.213729 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213734 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213754 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213829 | orchestrator | 2026-03-26 02:42:27.213835 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-26 02:42:27.213839 | orchestrator | Thursday 26 March 2026 02:42:04 +0000 (0:00:01.634) 0:00:08.554 ******** 
2026-03-26 02:42:27.213848 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213872 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:42:27.213881 | orchestrator | 2026-03-26 02:42:27.213886 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-26 02:42:27.213890 | orchestrator | Thursday 26 March 2026 02:42:05 +0000 (0:00:01.357) 0:00:09.912 ******** 2026-03-26 02:42:27.213895 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:42:27.213901 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:42:27.213906 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:42:27.213910 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:42:27.213915 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:42:27.213919 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:42:27.213924 | orchestrator | 2026-03-26 02:42:27.213928 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-26 02:42:27.213933 | orchestrator | Thursday 26 March 2026 02:42:08 +0000 (0:00:02.333) 0:00:12.246 ******** 2026-03-26 02:42:27.213937 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-03-26 02:42:27.213943 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-26 02:42:27.213948 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-26 02:42:27.213952 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-26 02:42:27.213956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-26 02:42:27.213961 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-26 02:42:27.213969 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 02:43:03.929211 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 02:43:03.929325 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 02:43:03.929348 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 02:43:03.929359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 02:43:03.929369 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 02:43:03.929380 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-26 02:43:03.929392 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-26 02:43:03.929419 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-26 02:43:03.929430 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-26 02:43:03.929440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-26 02:43:03.929450 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-26 02:43:03.929460 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 02:43:03.929471 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 02:43:03.929481 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 02:43:03.929491 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 02:43:03.929501 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 02:43:03.929512 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 02:43:03.929522 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 02:43:03.929532 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 02:43:03.929541 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 02:43:03.929551 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 02:43:03.929561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-03-26 02:43:03.929571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 02:43:03.929581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 02:43:03.929591 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 02:43:03.929601 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 02:43:03.929611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 02:43:03.929621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 02:43:03.929630 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-26 02:43:03.929641 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 02:43:03.929650 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-26 02:43:03.929661 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-26 02:43:03.929671 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-26 02:43:03.929681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-26 02:43:03.929691 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-26 02:43:03.929703 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'present'}) 2026-03-26 02:43:03.929733 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-26 02:43:03.929744 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-26 02:43:03.929758 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-26 02:43:03.929771 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-26 02:43:03.929783 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-26 02:43:03.929795 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-26 02:43:03.929806 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-26 02:43:03.929818 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-26 02:43:03.929829 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-26 02:43:03.929841 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-26 02:43:03.929882 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-26 02:43:03.929894 | orchestrator | 2026-03-26 02:43:03.929906 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-03-26 02:43:03.929918 | orchestrator | Thursday 26 March 2026 02:42:26 +0000 (0:00:18.232) 0:00:30.479 ******** 2026-03-26 02:43:03.929929 | orchestrator | 2026-03-26 02:43:03.929941 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 02:43:03.929953 | orchestrator | Thursday 26 March 2026 02:42:26 +0000 (0:00:00.260) 0:00:30.739 ******** 2026-03-26 02:43:03.929964 | orchestrator | 2026-03-26 02:43:03.929976 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 02:43:03.929987 | orchestrator | Thursday 26 March 2026 02:42:26 +0000 (0:00:00.105) 0:00:30.844 ******** 2026-03-26 02:43:03.929998 | orchestrator | 2026-03-26 02:43:03.930010 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 02:43:03.930108 | orchestrator | Thursday 26 March 2026 02:42:26 +0000 (0:00:00.098) 0:00:30.943 ******** 2026-03-26 02:43:03.930126 | orchestrator | 2026-03-26 02:43:03.930142 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 02:43:03.930152 | orchestrator | Thursday 26 March 2026 02:42:27 +0000 (0:00:00.071) 0:00:31.014 ******** 2026-03-26 02:43:03.930162 | orchestrator | 2026-03-26 02:43:03.930172 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 02:43:03.930182 | orchestrator | Thursday 26 March 2026 02:42:27 +0000 (0:00:00.073) 0:00:31.087 ******** 2026-03-26 02:43:03.930192 | orchestrator | 2026-03-26 02:43:03.930201 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-26 02:43:03.930212 | orchestrator | Thursday 26 March 2026 02:42:27 +0000 (0:00:00.073) 0:00:31.160 ******** 2026-03-26 02:43:03.930222 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:43:03.930232 | orchestrator | ok: 
[testbed-node-3] 2026-03-26 02:43:03.930242 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:43:03.930251 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:43:03.930261 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:43:03.930271 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:43:03.930280 | orchestrator | 2026-03-26 02:43:03.930290 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-26 02:43:03.930300 | orchestrator | Thursday 26 March 2026 02:42:28 +0000 (0:00:01.581) 0:00:32.741 ******** 2026-03-26 02:43:03.930322 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:43:03.930332 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:43:03.930341 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:43:03.930351 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:43:03.930361 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:43:03.930370 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:43:03.930380 | orchestrator | 2026-03-26 02:43:03.930390 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-26 02:43:03.930399 | orchestrator | 2026-03-26 02:43:03.930409 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-26 02:43:03.930419 | orchestrator | Thursday 26 March 2026 02:43:01 +0000 (0:00:32.834) 0:01:05.575 ******** 2026-03-26 02:43:03.930429 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:43:03.930438 | orchestrator | 2026-03-26 02:43:03.930448 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-26 02:43:03.930458 | orchestrator | Thursday 26 March 2026 02:43:02 +0000 (0:00:00.730) 0:01:06.306 ******** 2026-03-26 02:43:03.930468 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-26 02:43:03.930478 | orchestrator | 2026-03-26 02:43:03.930487 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-26 02:43:03.930497 | orchestrator | Thursday 26 March 2026 02:43:02 +0000 (0:00:00.534) 0:01:06.840 ******** 2026-03-26 02:43:03.930507 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:43:03.930517 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:43:03.930526 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:43:03.930536 | orchestrator | 2026-03-26 02:43:03.930546 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-26 02:43:03.930564 | orchestrator | Thursday 26 March 2026 02:43:03 +0000 (0:00:01.034) 0:01:07.875 ******** 2026-03-26 02:43:15.360935 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:43:15.361051 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:43:15.361066 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:43:15.361078 | orchestrator | 2026-03-26 02:43:15.361091 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-26 02:43:15.361120 | orchestrator | Thursday 26 March 2026 02:43:04 +0000 (0:00:00.338) 0:01:08.214 ******** 2026-03-26 02:43:15.361132 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:43:15.361143 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:43:15.361154 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:43:15.361165 | orchestrator | 2026-03-26 02:43:15.361176 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-26 02:43:15.361188 | orchestrator | Thursday 26 March 2026 02:43:04 +0000 (0:00:00.339) 0:01:08.553 ******** 2026-03-26 02:43:15.361199 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:43:15.361210 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:43:15.361220 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:43:15.361231 | orchestrator | 
2026-03-26 02:43:15.361242 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-26 02:43:15.361253 | orchestrator | Thursday 26 March 2026 02:43:04 +0000 (0:00:00.333) 0:01:08.887 ********
2026-03-26 02:43:15.361264 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:15.361275 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:15.361286 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:15.361297 | orchestrator |
2026-03-26 02:43:15.361308 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-26 02:43:15.361319 | orchestrator | Thursday 26 March 2026 02:43:05 +0000 (0:00:00.568) 0:01:09.456 ********
2026-03-26 02:43:15.361330 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361342 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361353 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361364 | orchestrator |
2026-03-26 02:43:15.361375 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-26 02:43:15.361410 | orchestrator | Thursday 26 March 2026 02:43:05 +0000 (0:00:00.322) 0:01:09.778 ********
2026-03-26 02:43:15.361424 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361441 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361460 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361479 | orchestrator |
2026-03-26 02:43:15.361497 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-26 02:43:15.361517 | orchestrator | Thursday 26 March 2026 02:43:06 +0000 (0:00:00.343) 0:01:10.122 ********
2026-03-26 02:43:15.361538 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361556 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361567 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361578 | orchestrator |
2026-03-26 02:43:15.361589 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-26 02:43:15.361600 | orchestrator | Thursday 26 March 2026 02:43:06 +0000 (0:00:00.338) 0:01:10.461 ********
2026-03-26 02:43:15.361611 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361622 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361633 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361644 | orchestrator |
2026-03-26 02:43:15.361655 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-26 02:43:15.361666 | orchestrator | Thursday 26 March 2026 02:43:06 +0000 (0:00:00.308) 0:01:10.769 ********
2026-03-26 02:43:15.361676 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361688 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361699 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361709 | orchestrator |
2026-03-26 02:43:15.361720 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-26 02:43:15.361731 | orchestrator | Thursday 26 March 2026 02:43:07 +0000 (0:00:00.547) 0:01:11.316 ********
2026-03-26 02:43:15.361742 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361753 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361764 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361774 | orchestrator |
2026-03-26 02:43:15.361785 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-26 02:43:15.361796 | orchestrator | Thursday 26 March 2026 02:43:07 +0000 (0:00:00.303) 0:01:11.620 ********
2026-03-26 02:43:15.361807 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361818 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361829 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361839 | orchestrator |
2026-03-26 02:43:15.361850 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-26 02:43:15.361931 | orchestrator | Thursday 26 March 2026 02:43:07 +0000 (0:00:00.309) 0:01:11.930 ********
2026-03-26 02:43:15.361947 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.361958 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.361979 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.361991 | orchestrator |
2026-03-26 02:43:15.362002 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-26 02:43:15.362067 | orchestrator | Thursday 26 March 2026 02:43:08 +0000 (0:00:00.292) 0:01:12.222 ********
2026-03-26 02:43:15.362080 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362091 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362101 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.362112 | orchestrator |
2026-03-26 02:43:15.362123 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-26 02:43:15.362139 | orchestrator | Thursday 26 March 2026 02:43:08 +0000 (0:00:00.506) 0:01:12.729 ********
2026-03-26 02:43:15.362158 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362176 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362196 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.362215 | orchestrator |
2026-03-26 02:43:15.362233 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-26 02:43:15.362266 | orchestrator | Thursday 26 March 2026 02:43:09 +0000 (0:00:00.309) 0:01:13.039 ********
2026-03-26 02:43:15.362283 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362299 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362317 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.362335 | orchestrator |
2026-03-26 02:43:15.362353 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-26 02:43:15.362372 | orchestrator | Thursday 26 March 2026 02:43:09 +0000 (0:00:00.303) 0:01:13.342 ********
2026-03-26 02:43:15.362417 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362436 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362452 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.362463 | orchestrator |
2026-03-26 02:43:15.362474 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-26 02:43:15.362494 | orchestrator | Thursday 26 March 2026 02:43:09 +0000 (0:00:00.305) 0:01:13.648 ********
2026-03-26 02:43:15.362515 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:43:15.362534 | orchestrator |
2026-03-26 02:43:15.362552 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-26 02:43:15.362573 | orchestrator | Thursday 26 March 2026 02:43:10 +0000 (0:00:00.763) 0:01:14.411 ********
2026-03-26 02:43:15.362592 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:15.362610 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:15.362629 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:15.362640 | orchestrator |
2026-03-26 02:43:15.362651 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-26 02:43:15.362663 | orchestrator | Thursday 26 March 2026 02:43:10 +0000 (0:00:00.442) 0:01:14.854 ********
2026-03-26 02:43:15.362674 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:15.362685 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:15.362695 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:15.362706 | orchestrator |
2026-03-26 02:43:15.362717 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-26 02:43:15.362728 | orchestrator | Thursday 26 March 2026 02:43:11 +0000 (0:00:00.423) 0:01:15.278 ********
2026-03-26 02:43:15.362739 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362750 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362761 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.362772 | orchestrator |
2026-03-26 02:43:15.362783 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-26 02:43:15.362794 | orchestrator | Thursday 26 March 2026 02:43:11 +0000 (0:00:00.334) 0:01:15.613 ********
2026-03-26 02:43:15.362805 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362816 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362827 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.362838 | orchestrator |
2026-03-26 02:43:15.362849 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-26 02:43:15.362860 | orchestrator | Thursday 26 March 2026 02:43:12 +0000 (0:00:00.602) 0:01:16.216 ********
2026-03-26 02:43:15.362903 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362915 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362925 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.362936 | orchestrator |
2026-03-26 02:43:15.362947 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-03-26 02:43:15.362958 | orchestrator | Thursday 26 March 2026 02:43:12 +0000 (0:00:00.372) 0:01:16.588 ********
2026-03-26 02:43:15.362969 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.362980 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.362991 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.363002 | orchestrator |
2026-03-26 02:43:15.363013 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-03-26 02:43:15.363024 | orchestrator | Thursday 26 March 2026 02:43:12 +0000 (0:00:00.349) 0:01:16.938 ********
2026-03-26 02:43:15.363048 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.363059 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.363070 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.363081 | orchestrator |
2026-03-26 02:43:15.363092 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-03-26 02:43:15.363103 | orchestrator | Thursday 26 March 2026 02:43:13 +0000 (0:00:00.330) 0:01:17.269 ********
2026-03-26 02:43:15.363114 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:15.363125 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:15.363136 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:15.363147 | orchestrator |
2026-03-26 02:43:15.363158 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-26 02:43:15.363169 | orchestrator | Thursday 26 March 2026 02:43:13 +0000 (0:00:00.569) 0:01:17.839 ********
2026-03-26 02:43:15.363182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:15.363196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:15.363208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:15.363248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785434 | orchestrator |
2026-03-26 02:43:21.785447 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-26 02:43:21.785462 | orchestrator | Thursday 26 March 2026 02:43:15 +0000 (0:00:01.476) 0:01:19.315 ********
2026-03-26 02:43:21.785476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785644 | orchestrator |
2026-03-26 02:43:21.785657 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-26 02:43:21.785670 | orchestrator | Thursday 26 March 2026 02:43:19 +0000 (0:00:03.899) 0:01:23.215 ********
2026-03-26 02:43:21.785682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:21.785742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.826279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.826420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.826434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.826443 | orchestrator |
2026-03-26 02:43:46.826453 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-26 02:43:46.826463 | orchestrator | Thursday 26 March 2026 02:43:21 +0000 (0:00:02.092) 0:01:25.307 ********
2026-03-26 02:43:46.826471 | orchestrator |
2026-03-26 02:43:46.826479 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-26 02:43:46.826487 | orchestrator | Thursday 26 March 2026 02:43:21 +0000 (0:00:00.065) 0:01:25.373 ********
2026-03-26 02:43:46.826495 | orchestrator |
2026-03-26 02:43:46.826503 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-26 02:43:46.826511 | orchestrator | Thursday 26 March 2026 02:43:21 +0000 (0:00:00.289) 0:01:25.662 ********
2026-03-26 02:43:46.826520 | orchestrator |
2026-03-26 02:43:46.826528 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-26 02:43:46.826536 | orchestrator | Thursday 26 March 2026 02:43:21 +0000 (0:00:00.072) 0:01:25.735 ********
2026-03-26 02:43:46.826544 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:43:46.826554 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:43:46.826562 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:43:46.826570 | orchestrator |
2026-03-26 02:43:46.826578 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-26 02:43:46.826586 | orchestrator | Thursday 26 March 2026 02:43:29 +0000 (0:00:07.747) 0:01:33.482 ********
2026-03-26 02:43:46.826594 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:43:46.826603 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:43:46.826611 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:43:46.826619 | orchestrator |
2026-03-26 02:43:46.826627 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-26 02:43:46.826635 | orchestrator | Thursday 26 March 2026 02:43:32 +0000 (0:00:02.620) 0:01:36.103 ********
2026-03-26 02:43:46.826643 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:43:46.826651 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:43:46.826660 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:43:46.826668 | orchestrator |
2026-03-26 02:43:46.826676 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-26 02:43:46.826684 | orchestrator | Thursday 26 March 2026 02:43:39 +0000 (0:00:07.646) 0:01:43.749 ********
2026-03-26 02:43:46.826692 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:43:46.826700 | orchestrator |
2026-03-26 02:43:46.826708 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-26 02:43:46.826716 | orchestrator | Thursday 26 March 2026 02:43:39 +0000 (0:00:00.119) 0:01:43.869 ********
2026-03-26 02:43:46.826724 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:46.826733 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:46.826742 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:46.826750 | orchestrator |
2026-03-26 02:43:46.826758 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-26 02:43:46.826767 | orchestrator | Thursday 26 March 2026 02:43:40 +0000 (0:00:01.058) 0:01:44.928 ********
2026-03-26 02:43:46.826775 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:46.826789 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:46.826797 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:43:46.826806 | orchestrator |
2026-03-26 02:43:46.826814 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-26 02:43:46.826822 | orchestrator | Thursday 26 March 2026 02:43:41 +0000 (0:00:00.619) 0:01:45.547 ********
2026-03-26 02:43:46.826830 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:46.826838 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:46.826846 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:46.826854 | orchestrator |
2026-03-26 02:43:46.826862 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-26 02:43:46.826882 | orchestrator | Thursday 26 March 2026 02:43:42 +0000 (0:00:00.798) 0:01:46.346 ********
2026-03-26 02:43:46.826890 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:43:46.826898 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:43:46.826906 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:43:46.826951 | orchestrator |
2026-03-26 02:43:46.826960 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-26 02:43:46.826968 | orchestrator | Thursday 26 March 2026 02:43:43 +0000 (0:00:00.638) 0:01:46.984 ********
2026-03-26 02:43:46.826976 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:46.826984 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:46.827008 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:46.827017 | orchestrator |
2026-03-26 02:43:46.827025 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-26 02:43:46.827033 | orchestrator | Thursday 26 March 2026 02:43:44 +0000 (0:00:01.219) 0:01:48.204 ********
2026-03-26 02:43:46.827041 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:46.827049 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:46.827057 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:46.827065 | orchestrator |
2026-03-26 02:43:46.827073 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-03-26 02:43:46.827082 | orchestrator | Thursday 26 March 2026 02:43:45 +0000 (0:00:00.328) 0:01:48.971 ********
2026-03-26 02:43:46.827090 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:43:46.827097 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:43:46.827105 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:43:46.827113 | orchestrator |
2026-03-26 02:43:46.827121 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-03-26 02:43:46.827130 | orchestrator | Thursday 26 March 2026 02:43:45 +0000 (0:00:00.328) 0:01:49.299 ********
2026-03-26 02:43:46.827140 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827151 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827159 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827167 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827182 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827190 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827198 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827211 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:46.827227 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490092 | orchestrator |
2026-03-26 02:43:54.490243 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-03-26 02:43:54.490273 | orchestrator | Thursday 26 March 2026 02:43:46 +0000 (0:00:01.476) 0:01:50.776 ********
2026-03-26 02:43:54.490297 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490318 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490335 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490354 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490446 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490523 | orchestrator |
2026-03-26 02:43:54.490544 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-26 02:43:54.490563 | orchestrator | Thursday 26 March 2026 02:43:50 +0000 (0:00:04.001) 0:01:54.777 ********
2026-03-26 02:43:54.490610 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490653 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490674 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490749 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 02:43:54.490769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:43:54.490797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 02:43:54.490817 | orchestrator | 2026-03-26 02:43:54.490836 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-26 02:43:54.490856 | orchestrator | Thursday 26 March 2026 02:43:54 +0000 (0:00:03.346) 0:01:58.124 ******** 2026-03-26 02:43:54.490875 | orchestrator | 2026-03-26 02:43:54.490893 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-26 02:43:54.490911 | orchestrator | Thursday 26 March 2026 02:43:54 +0000 (0:00:00.122) 0:01:58.246 ******** 2026-03-26 02:43:54.490965 | orchestrator | 2026-03-26 02:43:54.490986 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-26 02:43:54.491006 | orchestrator | Thursday 26 March 2026 02:43:54 +0000 (0:00:00.090) 0:01:58.337 ******** 2026-03-26 02:43:54.491025 | orchestrator | 2026-03-26 02:43:54.491060 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-26 02:44:19.190574 | orchestrator | Thursday 26 March 2026 02:43:54 +0000 (0:00:00.099) 0:01:58.436 ******** 2026-03-26 02:44:19.190680 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:44:19.190694 | orchestrator | changed: 
[testbed-node-2]
2026-03-26 02:44:19.190703 | orchestrator |
2026-03-26 02:44:19.190712 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-26 02:44:19.190721 | orchestrator | Thursday 26 March 2026 02:44:00 +0000 (0:00:06.304) 0:02:04.740 ********
2026-03-26 02:44:19.190729 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:44:19.190737 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:44:19.190746 | orchestrator |
2026-03-26 02:44:19.190754 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-26 02:44:19.190783 | orchestrator | Thursday 26 March 2026 02:44:07 +0000 (0:00:06.237) 0:02:10.978 ********
2026-03-26 02:44:19.190792 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:44:19.190800 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:44:19.190808 | orchestrator |
2026-03-26 02:44:19.190816 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-26 02:44:19.190824 | orchestrator | Thursday 26 March 2026 02:44:13 +0000 (0:00:06.207) 0:02:17.186 ********
2026-03-26 02:44:19.190832 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:44:19.190840 | orchestrator |
2026-03-26 02:44:19.190848 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-26 02:44:19.190857 | orchestrator | Thursday 26 March 2026 02:44:13 +0000 (0:00:00.171) 0:02:17.357 ********
2026-03-26 02:44:19.190865 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:44:19.190874 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:44:19.190882 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:44:19.190890 | orchestrator |
2026-03-26 02:44:19.190898 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-26 02:44:19.190906 | orchestrator | Thursday 26 March 2026 02:44:14 +0000 (0:00:01.063) 0:02:18.421 ********
2026-03-26 02:44:19.190914 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:44:19.190921 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:44:19.190929 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:44:19.190937 | orchestrator |
2026-03-26 02:44:19.190945 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-26 02:44:19.190953 | orchestrator | Thursday 26 March 2026 02:44:15 +0000 (0:00:00.674) 0:02:19.096 ********
2026-03-26 02:44:19.190986 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:44:19.190996 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:44:19.191004 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:44:19.191012 | orchestrator |
2026-03-26 02:44:19.191020 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-26 02:44:19.191028 | orchestrator | Thursday 26 March 2026 02:44:15 +0000 (0:00:00.822) 0:02:19.919 ********
2026-03-26 02:44:19.191036 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:44:19.191044 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:44:19.191052 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:44:19.191060 | orchestrator |
2026-03-26 02:44:19.191068 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-26 02:44:19.191076 | orchestrator | Thursday 26 March 2026 02:44:16 +0000 (0:00:00.641) 0:02:20.561 ********
2026-03-26 02:44:19.191084 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:44:19.191092 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:44:19.191100 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:44:19.191108 | orchestrator |
2026-03-26 02:44:19.191116 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-26 02:44:19.191125 | orchestrator | Thursday 26 March 2026 02:44:17 +0000 (0:00:01.176) 0:02:21.737 ********
2026-03-26 02:44:19.191135 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:44:19.191144 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:44:19.191153 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:44:19.191161 | orchestrator |
2026-03-26 02:44:19.191170 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:44:19.191180 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-26 02:44:19.191191 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-26 02:44:19.191201 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-26 02:44:19.191210 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:44:19.191226 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:44:19.191236 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 02:44:19.191245 | orchestrator |
2026-03-26 02:44:19.191254 | orchestrator |
2026-03-26 02:44:19.191276 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:44:19.191286 | orchestrator | Thursday 26 March 2026 02:44:18 +0000 (0:00:01.000) 0:02:22.738 ********
2026-03-26 02:44:19.191295 | orchestrator | ===============================================================================
2026-03-26 02:44:19.191304 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 32.83s
2026-03-26 02:44:19.191313 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.23s
2026-03-26 02:44:19.191322 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.05s
2026-03-26 02:44:19.191331 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.85s
2026-03-26 02:44:19.191341 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.86s
2026-03-26 02:44:19.191364 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.00s
2026-03-26 02:44:19.191373 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.90s
2026-03-26 02:44:19.191383 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.35s
2026-03-26 02:44:19.191392 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.33s
2026-03-26 02:44:19.191401 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.09s
2026-03-26 02:44:19.191410 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.63s
2026-03-26 02:44:19.191418 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.58s
2026-03-26 02:44:19.191426 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s
2026-03-26 02:44:19.191434 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s
2026-03-26 02:44:19.191442 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.47s
2026-03-26 02:44:19.191450 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.36s
2026-03-26 02:44:19.191458 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.27s
2026-03-26 02:44:19.191466 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.22s
2026-03-26 02:44:19.191474 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s
2026-03-26 02:44:19.191482 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.18s
2026-03-26 02:44:19.548288 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-26 02:44:19.548388 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-03-26 02:44:21.818424 | orchestrator | 2026-03-26 02:44:21 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-26 02:44:31.936487 | orchestrator | 2026-03-26 02:44:31 | INFO  | Task 89ea481d-9381-42c6-ad2c-7d8c23c0aae3 (wipe-partitions) was prepared for execution.
2026-03-26 02:44:31.936604 | orchestrator | 2026-03-26 02:44:31 | INFO  | It takes a moment until task 89ea481d-9381-42c6-ad2c-7d8c23c0aae3 (wipe-partitions) has been started and output is visible here.
2026-03-26 02:44:45.916994 | orchestrator |
2026-03-26 02:44:45.917210 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-26 02:44:45.917242 | orchestrator |
2026-03-26 02:44:45.917263 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-26 02:44:45.917282 | orchestrator | Thursday 26 March 2026 02:44:36 +0000 (0:00:00.153) 0:00:00.153 ********
2026-03-26 02:44:45.917339 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:44:45.917363 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:44:45.917381 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:44:45.917400 | orchestrator |
2026-03-26 02:44:45.917419 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-26 02:44:45.917439 | orchestrator | Thursday 26 March 2026 02:44:36 +0000 (0:00:00.590) 0:00:00.744 ********
2026-03-26 02:44:45.917458 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:44:45.917473 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:44:45.917484 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:44:45.917495 | orchestrator |
2026-03-26 02:44:45.917506 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-26 02:44:45.917520 | orchestrator | Thursday 26 March 2026 02:44:37 +0000 (0:00:00.398) 0:00:01.143 ********
2026-03-26 02:44:45.917533 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:44:45.917546 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:44:45.917558 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:44:45.917570 | orchestrator |
2026-03-26 02:44:45.917583 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-26 02:44:45.917596 | orchestrator | Thursday 26 March 2026 02:44:38 +0000 (0:00:00.637) 0:00:01.780 ********
2026-03-26 02:44:45.917608 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:44:45.917621 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:44:45.917635 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:44:45.917648 | orchestrator |
2026-03-26 02:44:45.917661 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-26 02:44:45.917673 | orchestrator | Thursday 26 March 2026 02:44:38 +0000 (0:00:00.281) 0:00:02.062 ********
2026-03-26 02:44:45.917685 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-26 02:44:45.917714 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-26 02:44:45.917726 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-26 02:44:45.917736 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-26 02:44:45.917747 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-26 02:44:45.917758 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-26 02:44:45.917784 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-26 02:44:45.917796 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-26 02:44:45.917807 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-26 02:44:45.917818 | orchestrator |
2026-03-26 02:44:45.917829 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-26 02:44:45.917840 | orchestrator | Thursday 26 March 2026 02:44:39 +0000 (0:00:01.215) 0:00:03.278 ********
2026-03-26 02:44:45.917851 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-26 02:44:45.917862 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-26 02:44:45.917873 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-26 02:44:45.917884 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-26 02:44:45.917895 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-26 02:44:45.917906 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-26 02:44:45.917917 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-26 02:44:45.917928 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-26 02:44:45.917938 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-26 02:44:45.917949 | orchestrator |
2026-03-26 02:44:45.917960 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-26 02:44:45.917971 | orchestrator | Thursday 26 March 2026 02:44:41 +0000 (0:00:01.629) 0:00:04.907 ********
2026-03-26 02:44:45.917982 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-26 02:44:45.917993 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-26 02:44:45.918097 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-26 02:44:45.918112 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-26 02:44:45.918134 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-26 02:44:45.918145 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-26 02:44:45.918156 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-26 02:44:45.918167 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-26 02:44:45.918178 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-26 02:44:45.918189 | orchestrator |
2026-03-26 02:44:45.918200 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-26 02:44:45.918211 | orchestrator | Thursday 26 March 2026 02:44:44 +0000 (0:00:03.037) 0:00:07.944 ********
2026-03-26 02:44:45.918222 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:44:45.918233 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:44:45.918244 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:44:45.918255 | orchestrator |
2026-03-26 02:44:45.918266 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-26 02:44:45.918277 | orchestrator | Thursday 26 March 2026 02:44:44 +0000 (0:00:00.649) 0:00:08.594 ********
2026-03-26 02:44:45.918288 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:44:45.918299 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:44:45.918310 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:44:45.918321 | orchestrator |
2026-03-26 02:44:45.918332 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:44:45.918348 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:44:45.918369 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:44:45.918415 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:44:45.918437 | orchestrator |
2026-03-26 02:44:45.918455 | orchestrator |
2026-03-26 02:44:45.918475 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:44:45.918487 | orchestrator | Thursday 26 March 2026 02:44:45 +0000 (0:00:00.645) 0:00:09.240 ********
2026-03-26 02:44:45.918498 | orchestrator | ===============================================================================
2026-03-26 02:44:45.918509 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.04s
2026-03-26 02:44:45.918520 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.63s
2026-03-26 02:44:45.918531 | orchestrator | Check device availability ----------------------------------------------- 1.22s
2026-03-26 02:44:45.918542 | orchestrator | Reload udev rules ------------------------------------------------------- 0.65s
2026-03-26 02:44:45.918553 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s
2026-03-26 02:44:45.918564 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.64s
2026-03-26 02:44:45.918575 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s
2026-03-26 02:44:45.918585 | orchestrator | Remove all rook related logical devices --------------------------------- 0.40s
2026-03-26 02:44:45.918596 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2026-03-26 02:44:58.622373 | orchestrator | 2026-03-26 02:44:58 | INFO  | Task a5e1532e-72fd-419b-9693-6cea10c48d87 (facts) was prepared for execution.
2026-03-26 02:44:58.622490 | orchestrator | 2026-03-26 02:44:58 | INFO  | It takes a moment until task a5e1532e-72fd-419b-9693-6cea10c48d87 (facts) has been started and output is visible here.
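The wipe-partitions play above performs a fixed sequence per disk: wipe filesystem/RAID signatures with wipefs, overwrite the first 32M with zeros, then reload udev rules and request device events from the kernel. As a minimal sketch (the device list and helper name are illustrative, not taken from the playbook; nothing is executed, the function only builds the command lines):

```python
# Hypothetical sketch of the per-device wipe sequence seen in the
# "Wipe partitions" play above. Device paths are examples; commands
# are only constructed as strings, never run.

def wipe_commands(devices):
    """Return the shell commands the play effectively runs."""
    cmds = []
    for dev in devices:
        cmds.append(f"wipefs --all {dev}")                       # drop filesystem/RAID signatures
        cmds.append(f"dd if=/dev/zero of={dev} bs=1M count=32")  # overwrite first 32M with zeros
    # after all devices are wiped, udev state is refreshed once
    cmds.append("udevadm control --reload-rules")                # reload udev rules
    cmds.append("udevadm trigger")                               # request device events from the kernel
    return cmds

print(wipe_commands(["/dev/sdb", "/dev/sdc", "/dev/sdd"]))
```

Zeroing the start of the disk in addition to wipefs also clears LVM/Ceph metadata that signature scanning alone may miss.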
2026-03-26 02:45:12.985645 | orchestrator |
2026-03-26 02:45:12.985768 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-26 02:45:12.985787 | orchestrator |
2026-03-26 02:45:12.985799 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-26 02:45:12.985811 | orchestrator | Thursday 26 March 2026 02:45:03 +0000 (0:00:00.297) 0:00:00.297 ********
2026-03-26 02:45:12.985848 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:45:12.985861 | orchestrator | ok: [testbed-manager]
2026-03-26 02:45:12.985873 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:45:12.985884 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:45:12.985895 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:45:12.985906 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:45:12.985916 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:45:12.985927 | orchestrator |
2026-03-26 02:45:12.985939 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-26 02:45:12.985951 | orchestrator | Thursday 26 March 2026 02:45:04 +0000 (0:00:01.137) 0:00:01.434 ********
2026-03-26 02:45:12.985963 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:45:12.985979 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:45:12.985998 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:45:12.986117 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:45:12.986133 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:12.986146 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:12.986158 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:45:12.986171 | orchestrator |
2026-03-26 02:45:12.986184 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-26 02:45:12.986196 | orchestrator |
2026-03-26 02:45:12.986209 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-26 02:45:12.986221 | orchestrator | Thursday 26 March 2026 02:45:05 +0000 (0:00:01.370) 0:00:02.805 ********
2026-03-26 02:45:12.986233 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:45:12.986246 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:45:12.986258 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:45:12.986270 | orchestrator | ok: [testbed-manager]
2026-03-26 02:45:12.986282 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:45:12.986294 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:45:12.986307 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:45:12.986319 | orchestrator |
2026-03-26 02:45:12.986332 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-26 02:45:12.986344 | orchestrator |
2026-03-26 02:45:12.986359 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-26 02:45:12.986378 | orchestrator | Thursday 26 March 2026 02:45:11 +0000 (0:00:06.124) 0:00:08.929 ********
2026-03-26 02:45:12.986397 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:45:12.986417 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:45:12.986436 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:45:12.986453 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:45:12.986466 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:12.986479 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:12.986491 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:45:12.986504 | orchestrator |
2026-03-26 02:45:12.986515 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:45:12.986526 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:45:12.986598 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:45:12.986621 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:45:12.986641 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:45:12.986655 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:45:12.986666 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:45:12.986689 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 02:45:12.986701 | orchestrator |
2026-03-26 02:45:12.986712 | orchestrator |
2026-03-26 02:45:12.986723 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:45:12.986734 | orchestrator | Thursday 26 March 2026 02:45:12 +0000 (0:00:00.597) 0:00:09.527 ********
2026-03-26 02:45:12.986745 | orchestrator | ===============================================================================
2026-03-26 02:45:12.986756 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.12s
2026-03-26 02:45:12.986767 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2026-03-26 02:45:12.986778 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s
2026-03-26 02:45:12.986789 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s
2026-03-26 02:45:15.625757 | orchestrator | 2026-03-26 02:45:15 | INFO  | Task b695aa9f-dffc-4083-94dc-70d9d0de54bc (ceph-configure-lvm-volumes) was prepared for execution.
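The ceph-configure-lvm-volumes play that follows enumerates block devices and, via _add-device-links.yml, attaches the stable /dev/disk/by-id names (the scsi-0QEMU_... and scsi-SQEMU_... items in the output) to each kernel device, so disks can later be addressed by identifiers that survive reboots. A minimal sketch of that grouping step (the link-to-device mapping below is illustrative stand-in data, not read from a real /dev/disk/by-id):

```python
# Hypothetical sketch of grouping /dev/disk/by-id names by the kernel
# device they resolve to, mimicking the "Add known links to the list of
# available block devices" tasks. The mapping is example data.

def links_by_device(by_id_links):
    """Invert {link_name: kernel_device} into {kernel_device: [link_names]}."""
    grouped = {}
    for link, device in sorted(by_id_links.items()):
        grouped.setdefault(device, []).append(link)
    return grouped

# Example mapping loosely modeled on the testbed-node-3 output
# (which device each link points to is an assumption here).
links = {
    "scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519": "sdb",
    "scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80": "sdc",
    "scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80": "sdc",
}
print(links_by_device(links))
```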
2026-03-26 02:45:15.625859 | orchestrator | 2026-03-26 02:45:15 | INFO  | It takes a moment until task b695aa9f-dffc-4083-94dc-70d9d0de54bc (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-26 02:45:29.419227 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-26 02:45:29.419340 | orchestrator | 2.16.14
2026-03-26 02:45:29.419350 | orchestrator |
2026-03-26 02:45:29.419357 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-26 02:45:29.419363 | orchestrator |
2026-03-26 02:45:29.419369 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-26 02:45:29.419374 | orchestrator | Thursday 26 March 2026 02:45:20 +0000 (0:00:00.371) 0:00:00.371 ********
2026-03-26 02:45:29.419380 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 02:45:29.419386 | orchestrator |
2026-03-26 02:45:29.419403 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-26 02:45:29.419409 | orchestrator | Thursday 26 March 2026 02:45:20 +0000 (0:00:00.281) 0:00:00.653 ********
2026-03-26 02:45:29.419414 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:45:29.419419 | orchestrator |
2026-03-26 02:45:29.419424 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419429 | orchestrator | Thursday 26 March 2026 02:45:21 +0000 (0:00:00.270) 0:00:00.924 ********
2026-03-26 02:45:29.419434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-26 02:45:29.419439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-26 02:45:29.419444 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-26 02:45:29.419449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-26 02:45:29.419454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-26 02:45:29.419459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-26 02:45:29.419464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-26 02:45:29.419469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-26 02:45:29.419474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-26 02:45:29.419479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-26 02:45:29.419484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-26 02:45:29.419489 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-26 02:45:29.419514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-26 02:45:29.419523 | orchestrator |
2026-03-26 02:45:29.419530 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419537 | orchestrator | Thursday 26 March 2026 02:45:21 +0000 (0:00:00.553) 0:00:01.477 ********
2026-03-26 02:45:29.419545 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419553 | orchestrator |
2026-03-26 02:45:29.419561 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419568 | orchestrator | Thursday 26 March 2026 02:45:21 +0000 (0:00:00.248) 0:00:01.726 ********
2026-03-26 02:45:29.419575 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419583 | orchestrator |
2026-03-26 02:45:29.419590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419598 | orchestrator | Thursday 26 March 2026 02:45:22 +0000 (0:00:00.224) 0:00:01.950 ********
2026-03-26 02:45:29.419606 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419613 | orchestrator |
2026-03-26 02:45:29.419621 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419630 | orchestrator | Thursday 26 March 2026 02:45:22 +0000 (0:00:00.206) 0:00:02.156 ********
2026-03-26 02:45:29.419638 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419645 | orchestrator |
2026-03-26 02:45:29.419654 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419663 | orchestrator | Thursday 26 March 2026 02:45:22 +0000 (0:00:00.215) 0:00:02.372 ********
2026-03-26 02:45:29.419668 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419673 | orchestrator |
2026-03-26 02:45:29.419678 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419683 | orchestrator | Thursday 26 March 2026 02:45:22 +0000 (0:00:00.218) 0:00:02.591 ********
2026-03-26 02:45:29.419688 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419693 | orchestrator |
2026-03-26 02:45:29.419698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419702 | orchestrator | Thursday 26 March 2026 02:45:23 +0000 (0:00:00.245) 0:00:02.836 ********
2026-03-26 02:45:29.419708 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419714 | orchestrator |
2026-03-26 02:45:29.419719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419725 | orchestrator | Thursday 26 March 2026 02:45:23 +0000 (0:00:00.220) 0:00:03.057 ********
2026-03-26 02:45:29.419731 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:45:29.419736 | orchestrator |
2026-03-26 02:45:29.419742 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419747 | orchestrator | Thursday 26 March 2026 02:45:23 +0000 (0:00:00.226) 0:00:03.283 ********
2026-03-26 02:45:29.419753 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519)
2026-03-26 02:45:29.419760 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519)
2026-03-26 02:45:29.419766 | orchestrator |
2026-03-26 02:45:29.419771 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419791 | orchestrator | Thursday 26 March 2026 02:45:24 +0000 (0:00:00.708) 0:00:03.992 ********
2026-03-26 02:45:29.419797 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80)
2026-03-26 02:45:29.419803 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80)
2026-03-26 02:45:29.419808 | orchestrator |
2026-03-26 02:45:29.419813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419818 | orchestrator | Thursday 26 March 2026 02:45:24 +0000 (0:00:00.720) 0:00:04.712 ********
2026-03-26 02:45:29.419827 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331)
2026-03-26 02:45:29.419838 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331)
2026-03-26 02:45:29.419843 | orchestrator |
2026-03-26 02:45:29.419848 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:29.419853 | orchestrator | Thursday 26 March 2026 02:45:25
+0000 (0:00:01.090) 0:00:05.802 ******** 2026-03-26 02:45:29.419858 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8) 2026-03-26 02:45:29.419862 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8) 2026-03-26 02:45:29.419867 | orchestrator | 2026-03-26 02:45:29.419872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:29.419877 | orchestrator | Thursday 26 March 2026 02:45:26 +0000 (0:00:00.496) 0:00:06.299 ******** 2026-03-26 02:45:29.419882 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-26 02:45:29.419887 | orchestrator | 2026-03-26 02:45:29.419892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.419897 | orchestrator | Thursday 26 March 2026 02:45:26 +0000 (0:00:00.382) 0:00:06.682 ******** 2026-03-26 02:45:29.419902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-26 02:45:29.419907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-26 02:45:29.419911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-26 02:45:29.419916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-26 02:45:29.419921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-26 02:45:29.419926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-26 02:45:29.419931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-26 02:45:29.419935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-03-26 02:45:29.419940 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-26 02:45:29.419945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-26 02:45:29.419950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-26 02:45:29.419955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-26 02:45:29.419959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-26 02:45:29.419964 | orchestrator | 2026-03-26 02:45:29.419969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.419974 | orchestrator | Thursday 26 March 2026 02:45:27 +0000 (0:00:00.446) 0:00:07.128 ******** 2026-03-26 02:45:29.419979 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:29.419984 | orchestrator | 2026-03-26 02:45:29.419988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.419993 | orchestrator | Thursday 26 March 2026 02:45:27 +0000 (0:00:00.238) 0:00:07.367 ******** 2026-03-26 02:45:29.419998 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:29.420003 | orchestrator | 2026-03-26 02:45:29.420008 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.420013 | orchestrator | Thursday 26 March 2026 02:45:27 +0000 (0:00:00.216) 0:00:07.583 ******** 2026-03-26 02:45:29.420017 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:29.420022 | orchestrator | 2026-03-26 02:45:29.420027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.420032 | orchestrator | Thursday 26 March 2026 02:45:28 
+0000 (0:00:00.228) 0:00:07.812 ******** 2026-03-26 02:45:29.420041 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:29.420046 | orchestrator | 2026-03-26 02:45:29.420051 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.420055 | orchestrator | Thursday 26 March 2026 02:45:28 +0000 (0:00:00.222) 0:00:08.035 ******** 2026-03-26 02:45:29.420060 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:29.420082 | orchestrator | 2026-03-26 02:45:29.420087 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.420092 | orchestrator | Thursday 26 March 2026 02:45:28 +0000 (0:00:00.244) 0:00:08.279 ******** 2026-03-26 02:45:29.420097 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:29.420102 | orchestrator | 2026-03-26 02:45:29.420106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:29.420111 | orchestrator | Thursday 26 March 2026 02:45:29 +0000 (0:00:00.681) 0:00:08.960 ******** 2026-03-26 02:45:29.420116 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:29.420121 | orchestrator | 2026-03-26 02:45:29.420129 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:37.479546 | orchestrator | Thursday 26 March 2026 02:45:29 +0000 (0:00:00.256) 0:00:09.216 ******** 2026-03-26 02:45:37.479634 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479644 | orchestrator | 2026-03-26 02:45:37.479650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:37.479656 | orchestrator | Thursday 26 March 2026 02:45:29 +0000 (0:00:00.211) 0:00:09.428 ******** 2026-03-26 02:45:37.479662 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-26 02:45:37.479668 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-26 
02:45:37.479674 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-26 02:45:37.479691 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-26 02:45:37.479697 | orchestrator | 2026-03-26 02:45:37.479702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:37.479708 | orchestrator | Thursday 26 March 2026 02:45:30 +0000 (0:00:00.783) 0:00:10.211 ******** 2026-03-26 02:45:37.479713 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479718 | orchestrator | 2026-03-26 02:45:37.479723 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:37.479728 | orchestrator | Thursday 26 March 2026 02:45:30 +0000 (0:00:00.233) 0:00:10.445 ******** 2026-03-26 02:45:37.479734 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479739 | orchestrator | 2026-03-26 02:45:37.479744 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:37.479749 | orchestrator | Thursday 26 March 2026 02:45:30 +0000 (0:00:00.232) 0:00:10.677 ******** 2026-03-26 02:45:37.479754 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479759 | orchestrator | 2026-03-26 02:45:37.479765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:45:37.479770 | orchestrator | Thursday 26 March 2026 02:45:31 +0000 (0:00:00.216) 0:00:10.894 ******** 2026-03-26 02:45:37.479775 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479780 | orchestrator | 2026-03-26 02:45:37.479785 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-26 02:45:37.479790 | orchestrator | Thursday 26 March 2026 02:45:31 +0000 (0:00:00.232) 0:00:11.126 ******** 2026-03-26 02:45:37.479796 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-03-26 02:45:37.479801 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-03-26 02:45:37.479806 | orchestrator | 2026-03-26 02:45:37.479811 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-26 02:45:37.479816 | orchestrator | Thursday 26 March 2026 02:45:31 +0000 (0:00:00.200) 0:00:11.326 ******** 2026-03-26 02:45:37.479821 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479826 | orchestrator | 2026-03-26 02:45:37.479831 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-26 02:45:37.479837 | orchestrator | Thursday 26 March 2026 02:45:31 +0000 (0:00:00.177) 0:00:11.504 ******** 2026-03-26 02:45:37.479857 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479863 | orchestrator | 2026-03-26 02:45:37.479868 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-26 02:45:37.479873 | orchestrator | Thursday 26 March 2026 02:45:31 +0000 (0:00:00.203) 0:00:11.708 ******** 2026-03-26 02:45:37.479878 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479884 | orchestrator | 2026-03-26 02:45:37.479889 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-26 02:45:37.479894 | orchestrator | Thursday 26 March 2026 02:45:32 +0000 (0:00:00.396) 0:00:12.105 ******** 2026-03-26 02:45:37.479899 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:45:37.479904 | orchestrator | 2026-03-26 02:45:37.479909 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-26 02:45:37.479914 | orchestrator | Thursday 26 March 2026 02:45:32 +0000 (0:00:00.159) 0:00:12.264 ******** 2026-03-26 02:45:37.479920 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}}) 2026-03-26 02:45:37.479926 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e2623153-bc41-510f-8884-ef957bb96082'}}) 2026-03-26 02:45:37.479931 | orchestrator | 2026-03-26 02:45:37.479936 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-26 02:45:37.479942 | orchestrator | Thursday 26 March 2026 02:45:32 +0000 (0:00:00.208) 0:00:12.473 ******** 2026-03-26 02:45:37.479947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}})  2026-03-26 02:45:37.479954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e2623153-bc41-510f-8884-ef957bb96082'}})  2026-03-26 02:45:37.479959 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479964 | orchestrator | 2026-03-26 02:45:37.479970 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-26 02:45:37.479975 | orchestrator | Thursday 26 March 2026 02:45:32 +0000 (0:00:00.161) 0:00:12.634 ******** 2026-03-26 02:45:37.479980 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}})  2026-03-26 02:45:37.479985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e2623153-bc41-510f-8884-ef957bb96082'}})  2026-03-26 02:45:37.479990 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.479995 | orchestrator | 2026-03-26 02:45:37.480000 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-26 02:45:37.480005 | orchestrator | Thursday 26 March 2026 02:45:33 +0000 (0:00:00.184) 0:00:12.818 ******** 2026-03-26 02:45:37.480011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}})  2026-03-26 02:45:37.480026 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e2623153-bc41-510f-8884-ef957bb96082'}})  2026-03-26 02:45:37.480032 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.480037 | orchestrator | 2026-03-26 02:45:37.480043 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-26 02:45:37.480048 | orchestrator | Thursday 26 March 2026 02:45:33 +0000 (0:00:00.175) 0:00:12.993 ******** 2026-03-26 02:45:37.480069 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:45:37.480106 | orchestrator | 2026-03-26 02:45:37.480115 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-26 02:45:37.480124 | orchestrator | Thursday 26 March 2026 02:45:33 +0000 (0:00:00.149) 0:00:13.142 ******** 2026-03-26 02:45:37.480130 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:45:37.480136 | orchestrator | 2026-03-26 02:45:37.480142 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-26 02:45:37.480148 | orchestrator | Thursday 26 March 2026 02:45:33 +0000 (0:00:00.144) 0:00:13.286 ******** 2026-03-26 02:45:37.480159 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.480165 | orchestrator | 2026-03-26 02:45:37.480171 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-26 02:45:37.480177 | orchestrator | Thursday 26 March 2026 02:45:33 +0000 (0:00:00.151) 0:00:13.438 ******** 2026-03-26 02:45:37.480182 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.480188 | orchestrator | 2026-03-26 02:45:37.480194 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-26 02:45:37.480200 | orchestrator | Thursday 26 March 2026 02:45:33 +0000 (0:00:00.139) 0:00:13.577 ******** 2026-03-26 02:45:37.480206 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.480212 | orchestrator | 2026-03-26 
02:45:37.480217 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-26 02:45:37.480223 | orchestrator | Thursday 26 March 2026 02:45:33 +0000 (0:00:00.129) 0:00:13.707 ******** 2026-03-26 02:45:37.480229 | orchestrator | ok: [testbed-node-3] => { 2026-03-26 02:45:37.480235 | orchestrator |  "ceph_osd_devices": { 2026-03-26 02:45:37.480241 | orchestrator |  "sdb": { 2026-03-26 02:45:37.480247 | orchestrator |  "osd_lvm_uuid": "93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a" 2026-03-26 02:45:37.480253 | orchestrator |  }, 2026-03-26 02:45:37.480259 | orchestrator |  "sdc": { 2026-03-26 02:45:37.480265 | orchestrator |  "osd_lvm_uuid": "e2623153-bc41-510f-8884-ef957bb96082" 2026-03-26 02:45:37.480270 | orchestrator |  } 2026-03-26 02:45:37.480276 | orchestrator |  } 2026-03-26 02:45:37.480282 | orchestrator | } 2026-03-26 02:45:37.480288 | orchestrator | 2026-03-26 02:45:37.480294 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-26 02:45:37.480300 | orchestrator | Thursday 26 March 2026 02:45:34 +0000 (0:00:00.391) 0:00:14.098 ******** 2026-03-26 02:45:37.480305 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.480311 | orchestrator | 2026-03-26 02:45:37.480316 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-26 02:45:37.480322 | orchestrator | Thursday 26 March 2026 02:45:34 +0000 (0:00:00.154) 0:00:14.254 ******** 2026-03-26 02:45:37.480328 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.480334 | orchestrator | 2026-03-26 02:45:37.480339 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-26 02:45:37.480345 | orchestrator | Thursday 26 March 2026 02:45:34 +0000 (0:00:00.134) 0:00:14.388 ******** 2026-03-26 02:45:37.480351 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:45:37.480357 | orchestrator | 2026-03-26 
02:45:37.480363 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-26 02:45:37.480369 | orchestrator | Thursday 26 March 2026 02:45:34 +0000 (0:00:00.151) 0:00:14.540 ******** 2026-03-26 02:45:37.480374 | orchestrator | changed: [testbed-node-3] => { 2026-03-26 02:45:37.480380 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-26 02:45:37.480386 | orchestrator |  "ceph_osd_devices": { 2026-03-26 02:45:37.480392 | orchestrator |  "sdb": { 2026-03-26 02:45:37.480398 | orchestrator |  "osd_lvm_uuid": "93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a" 2026-03-26 02:45:37.480404 | orchestrator |  }, 2026-03-26 02:45:37.480410 | orchestrator |  "sdc": { 2026-03-26 02:45:37.480416 | orchestrator |  "osd_lvm_uuid": "e2623153-bc41-510f-8884-ef957bb96082" 2026-03-26 02:45:37.480422 | orchestrator |  } 2026-03-26 02:45:37.480427 | orchestrator |  }, 2026-03-26 02:45:37.480433 | orchestrator |  "lvm_volumes": [ 2026-03-26 02:45:37.480439 | orchestrator |  { 2026-03-26 02:45:37.480444 | orchestrator |  "data": "osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a", 2026-03-26 02:45:37.480450 | orchestrator |  "data_vg": "ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a" 2026-03-26 02:45:37.480456 | orchestrator |  }, 2026-03-26 02:45:37.480462 | orchestrator |  { 2026-03-26 02:45:37.480468 | orchestrator |  "data": "osd-block-e2623153-bc41-510f-8884-ef957bb96082", 2026-03-26 02:45:37.480477 | orchestrator |  "data_vg": "ceph-e2623153-bc41-510f-8884-ef957bb96082" 2026-03-26 02:45:37.480483 | orchestrator |  } 2026-03-26 02:45:37.480488 | orchestrator |  ] 2026-03-26 02:45:37.480493 | orchestrator |  } 2026-03-26 02:45:37.480498 | orchestrator | } 2026-03-26 02:45:37.480503 | orchestrator | 2026-03-26 02:45:37.480509 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-26 02:45:37.480514 | orchestrator | Thursday 26 March 2026 02:45:34 +0000 (0:00:00.232) 0:00:14.772 ******** 2026-03-26 
02:45:37.480519 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-26 02:45:37.480524 | orchestrator | 2026-03-26 02:45:37.480529 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-26 02:45:37.480534 | orchestrator | 2026-03-26 02:45:37.480539 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-26 02:45:37.480544 | orchestrator | Thursday 26 March 2026 02:45:36 +0000 (0:00:01.998) 0:00:16.770 ******** 2026-03-26 02:45:37.480549 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-26 02:45:37.480554 | orchestrator | 2026-03-26 02:45:37.480559 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-26 02:45:37.480564 | orchestrator | Thursday 26 March 2026 02:45:37 +0000 (0:00:00.269) 0:00:17.040 ******** 2026-03-26 02:45:37.480569 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:45:37.480575 | orchestrator | 2026-03-26 02:45:37.480583 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804599 | orchestrator | Thursday 26 March 2026 02:45:37 +0000 (0:00:00.241) 0:00:17.282 ******** 2026-03-26 02:45:46.804704 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-26 02:45:46.804716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-26 02:45:46.804722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-26 02:45:46.804739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-26 02:45:46.804743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-26 02:45:46.804747 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-26 02:45:46.804751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-26 02:45:46.804755 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-26 02:45:46.804759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-26 02:45:46.804764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-26 02:45:46.804768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-26 02:45:46.804780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-26 02:45:46.804784 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-26 02:45:46.804789 | orchestrator | 2026-03-26 02:45:46.804793 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804797 | orchestrator | Thursday 26 March 2026 02:45:38 +0000 (0:00:00.652) 0:00:17.934 ******** 2026-03-26 02:45:46.804801 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:45:46.804806 | orchestrator | 2026-03-26 02:45:46.804810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804814 | orchestrator | Thursday 26 March 2026 02:45:38 +0000 (0:00:00.239) 0:00:18.174 ******** 2026-03-26 02:45:46.804818 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:45:46.804822 | orchestrator | 2026-03-26 02:45:46.804826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804830 | orchestrator | Thursday 26 March 2026 02:45:38 +0000 (0:00:00.243) 0:00:18.417 ******** 2026-03-26 02:45:46.804849 | orchestrator | skipping: 
[testbed-node-4] 2026-03-26 02:45:46.804854 | orchestrator | 2026-03-26 02:45:46.804860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804866 | orchestrator | Thursday 26 March 2026 02:45:38 +0000 (0:00:00.228) 0:00:18.646 ******** 2026-03-26 02:45:46.804872 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:45:46.804878 | orchestrator | 2026-03-26 02:45:46.804883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804889 | orchestrator | Thursday 26 March 2026 02:45:39 +0000 (0:00:00.221) 0:00:18.868 ******** 2026-03-26 02:45:46.804895 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:45:46.804902 | orchestrator | 2026-03-26 02:45:46.804907 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804913 | orchestrator | Thursday 26 March 2026 02:45:39 +0000 (0:00:00.233) 0:00:19.101 ******** 2026-03-26 02:45:46.804920 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:45:46.804926 | orchestrator | 2026-03-26 02:45:46.804933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804939 | orchestrator | Thursday 26 March 2026 02:45:39 +0000 (0:00:00.210) 0:00:19.312 ******** 2026-03-26 02:45:46.804945 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:45:46.804951 | orchestrator | 2026-03-26 02:45:46.804955 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804959 | orchestrator | Thursday 26 March 2026 02:45:39 +0000 (0:00:00.212) 0:00:19.525 ******** 2026-03-26 02:45:46.804963 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:45:46.804966 | orchestrator | 2026-03-26 02:45:46.804970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804974 | 
orchestrator | Thursday 26 March 2026 02:45:39 +0000 (0:00:00.212) 0:00:19.738 ******** 2026-03-26 02:45:46.804978 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea) 2026-03-26 02:45:46.804983 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea) 2026-03-26 02:45:46.804987 | orchestrator | 2026-03-26 02:45:46.804991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.804995 | orchestrator | Thursday 26 March 2026 02:45:40 +0000 (0:00:00.700) 0:00:20.438 ******** 2026-03-26 02:45:46.804999 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab) 2026-03-26 02:45:46.805002 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab) 2026-03-26 02:45:46.805006 | orchestrator | 2026-03-26 02:45:46.805010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.805014 | orchestrator | Thursday 26 March 2026 02:45:41 +0000 (0:00:00.735) 0:00:21.173 ******** 2026-03-26 02:45:46.805018 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263) 2026-03-26 02:45:46.805021 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263) 2026-03-26 02:45:46.805025 | orchestrator | 2026-03-26 02:45:46.805029 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:45:46.805046 | orchestrator | Thursday 26 March 2026 02:45:42 +0000 (0:00:00.966) 0:00:22.140 ******** 2026-03-26 02:45:46.805050 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44) 2026-03-26 02:45:46.805054 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44)
2026-03-26 02:45:46.805058 | orchestrator |
2026-03-26 02:45:46.805061 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:46.805069 | orchestrator | Thursday 26 March 2026 02:45:42 +0000 (0:00:00.480) 0:00:22.620 ********
2026-03-26 02:45:46.805073 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-26 02:45:46.805084 | orchestrator |
2026-03-26 02:45:46.805133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805138 | orchestrator | Thursday 26 March 2026 02:45:43 +0000 (0:00:00.368) 0:00:22.989 ********
2026-03-26 02:45:46.805141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-26 02:45:46.805145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-26 02:45:46.805149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-26 02:45:46.805153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-26 02:45:46.805156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-26 02:45:46.805160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-26 02:45:46.805164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-26 02:45:46.805168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-26 02:45:46.805171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-26 02:45:46.805175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-26 02:45:46.805179 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-26 02:45:46.805183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-26 02:45:46.805187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-26 02:45:46.805191 | orchestrator |
2026-03-26 02:45:46.805194 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805198 | orchestrator | Thursday 26 March 2026 02:45:43 +0000 (0:00:00.424) 0:00:23.414 ********
2026-03-26 02:45:46.805202 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805206 | orchestrator |
2026-03-26 02:45:46.805209 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805213 | orchestrator | Thursday 26 March 2026 02:45:43 +0000 (0:00:00.219) 0:00:23.634 ********
2026-03-26 02:45:46.805217 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805221 | orchestrator |
2026-03-26 02:45:46.805225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805229 | orchestrator | Thursday 26 March 2026 02:45:44 +0000 (0:00:00.208) 0:00:23.842 ********
2026-03-26 02:45:46.805232 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805236 | orchestrator |
2026-03-26 02:45:46.805240 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805244 | orchestrator | Thursday 26 March 2026 02:45:44 +0000 (0:00:00.206) 0:00:24.048 ********
2026-03-26 02:45:46.805248 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805251 | orchestrator |
2026-03-26 02:45:46.805255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805259 | orchestrator | Thursday 26 March 2026 02:45:44 +0000 (0:00:00.230) 0:00:24.279 ********
2026-03-26 02:45:46.805263 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805266 | orchestrator |
2026-03-26 02:45:46.805270 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805274 | orchestrator | Thursday 26 March 2026 02:45:44 +0000 (0:00:00.220) 0:00:24.500 ********
2026-03-26 02:45:46.805278 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805282 | orchestrator |
2026-03-26 02:45:46.805285 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805289 | orchestrator | Thursday 26 March 2026 02:45:44 +0000 (0:00:00.218) 0:00:24.719 ********
2026-03-26 02:45:46.805293 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805303 | orchestrator |
2026-03-26 02:45:46.805307 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805311 | orchestrator | Thursday 26 March 2026 02:45:45 +0000 (0:00:00.212) 0:00:24.931 ********
2026-03-26 02:45:46.805315 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:46.805319 | orchestrator |
2026-03-26 02:45:46.805322 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805326 | orchestrator | Thursday 26 March 2026 02:45:45 +0000 (0:00:00.726) 0:00:25.658 ********
2026-03-26 02:45:46.805330 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-26 02:45:46.805335 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-26 02:45:46.805339 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-26 02:45:46.805343 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-26 02:45:46.805347 | orchestrator |
2026-03-26 02:45:46.805350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:46.805354 | orchestrator | Thursday 26 March 2026 02:45:46 +0000 (0:00:00.727) 0:00:26.386 ********
2026-03-26 02:45:46.805358 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122359 | orchestrator |
2026-03-26 02:45:53.122458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:53.122469 | orchestrator | Thursday 26 March 2026 02:45:46 +0000 (0:00:00.221) 0:00:26.607 ********
2026-03-26 02:45:53.122474 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122479 | orchestrator |
2026-03-26 02:45:53.122483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:53.122488 | orchestrator | Thursday 26 March 2026 02:45:47 +0000 (0:00:00.224) 0:00:26.832 ********
2026-03-26 02:45:53.122504 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122509 | orchestrator |
2026-03-26 02:45:53.122514 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:45:53.122518 | orchestrator | Thursday 26 March 2026 02:45:47 +0000 (0:00:00.270) 0:00:27.103 ********
2026-03-26 02:45:53.122522 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122526 | orchestrator |
2026-03-26 02:45:53.122531 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-26 02:45:53.122535 | orchestrator | Thursday 26 March 2026 02:45:47 +0000 (0:00:00.211) 0:00:27.314 ********
2026-03-26 02:45:53.122539 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-26 02:45:53.122544 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-26 02:45:53.122548 | orchestrator |
2026-03-26 02:45:53.122552 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-26 02:45:53.122556 | orchestrator | Thursday 26 March 2026 02:45:47 +0000 (0:00:00.179) 0:00:27.493 ********
2026-03-26 02:45:53.122560 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122565 | orchestrator |
2026-03-26 02:45:53.122569 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-26 02:45:53.122573 | orchestrator | Thursday 26 March 2026 02:45:47 +0000 (0:00:00.135) 0:00:27.629 ********
2026-03-26 02:45:53.122577 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122582 | orchestrator |
2026-03-26 02:45:53.122586 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-26 02:45:53.122590 | orchestrator | Thursday 26 March 2026 02:45:47 +0000 (0:00:00.141) 0:00:27.771 ********
2026-03-26 02:45:53.122595 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122599 | orchestrator |
2026-03-26 02:45:53.122604 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-26 02:45:53.122608 | orchestrator | Thursday 26 March 2026 02:45:48 +0000 (0:00:00.145) 0:00:27.916 ********
2026-03-26 02:45:53.122612 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:45:53.122618 | orchestrator |
2026-03-26 02:45:53.122622 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-26 02:45:53.122627 | orchestrator | Thursday 26 March 2026 02:45:48 +0000 (0:00:00.130) 0:00:28.046 ********
2026-03-26 02:45:53.122648 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a652979e-9f40-503a-bbc8-6de5e605991e'}})
2026-03-26 02:45:53.122653 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5eee7c3-8883-5bbe-be5a-75726e822543'}})
2026-03-26 02:45:53.122658 | orchestrator |
2026-03-26 02:45:53.122663 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-26 02:45:53.122667 | orchestrator | Thursday 26 March 2026 02:45:48 +0000 (0:00:00.199) 0:00:28.246 ********
2026-03-26 02:45:53.122672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a652979e-9f40-503a-bbc8-6de5e605991e'}})
2026-03-26 02:45:53.122678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5eee7c3-8883-5bbe-be5a-75726e822543'}})
2026-03-26 02:45:53.122683 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122687 | orchestrator |
2026-03-26 02:45:53.122691 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-26 02:45:53.122696 | orchestrator | Thursday 26 March 2026 02:45:48 +0000 (0:00:00.397) 0:00:28.643 ********
2026-03-26 02:45:53.122700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a652979e-9f40-503a-bbc8-6de5e605991e'}})
2026-03-26 02:45:53.122704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5eee7c3-8883-5bbe-be5a-75726e822543'}})
2026-03-26 02:45:53.122709 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122713 | orchestrator |
2026-03-26 02:45:53.122718 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-26 02:45:53.122722 | orchestrator | Thursday 26 March 2026 02:45:48 +0000 (0:00:00.157) 0:00:28.800 ********
2026-03-26 02:45:53.122726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a652979e-9f40-503a-bbc8-6de5e605991e'}})
2026-03-26 02:45:53.122731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5eee7c3-8883-5bbe-be5a-75726e822543'}})
2026-03-26 02:45:53.122735 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122740 | orchestrator |
2026-03-26 02:45:53.122744 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-26 02:45:53.122748 | orchestrator | Thursday 26 March 2026 02:45:49 +0000 (0:00:00.152) 0:00:28.953 ********
2026-03-26 02:45:53.122752 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:45:53.122757 | orchestrator |
2026-03-26 02:45:53.122761 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-26 02:45:53.122765 | orchestrator | Thursday 26 March 2026 02:45:49 +0000 (0:00:00.138) 0:00:29.091 ********
2026-03-26 02:45:53.122770 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:45:53.122774 | orchestrator |
2026-03-26 02:45:53.122778 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-26 02:45:53.122783 | orchestrator | Thursday 26 March 2026 02:45:49 +0000 (0:00:00.149) 0:00:29.241 ********
2026-03-26 02:45:53.122797 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122802 | orchestrator |
2026-03-26 02:45:53.122806 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-26 02:45:53.122811 | orchestrator | Thursday 26 March 2026 02:45:49 +0000 (0:00:00.143) 0:00:29.385 ********
2026-03-26 02:45:53.122815 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122819 | orchestrator |
2026-03-26 02:45:53.122824 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-26 02:45:53.122828 | orchestrator | Thursday 26 March 2026 02:45:49 +0000 (0:00:00.141) 0:00:29.526 ********
2026-03-26 02:45:53.122836 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122840 | orchestrator |
2026-03-26 02:45:53.122845 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-26 02:45:53.122849 | orchestrator | Thursday 26 March 2026 02:45:49 +0000 (0:00:00.153) 0:00:29.680 ********
2026-03-26 02:45:53.122858 | orchestrator | ok: [testbed-node-4] => {
2026-03-26 02:45:53.122864 | orchestrator |     "ceph_osd_devices": {
2026-03-26 02:45:53.122872 | orchestrator |         "sdb": {
2026-03-26 02:45:53.122880 | orchestrator |             "osd_lvm_uuid": "a652979e-9f40-503a-bbc8-6de5e605991e"
2026-03-26 02:45:53.122887 | orchestrator |         },
2026-03-26 02:45:53.122894 | orchestrator |         "sdc": {
2026-03-26 02:45:53.122901 | orchestrator |             "osd_lvm_uuid": "b5eee7c3-8883-5bbe-be5a-75726e822543"
2026-03-26 02:45:53.122909 | orchestrator |         }
2026-03-26 02:45:53.122916 | orchestrator |     }
2026-03-26 02:45:53.122923 | orchestrator | }
2026-03-26 02:45:53.122933 | orchestrator |
2026-03-26 02:45:53.122938 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-26 02:45:53.122943 | orchestrator | Thursday 26 March 2026 02:45:50 +0000 (0:00:00.147) 0:00:29.827 ********
2026-03-26 02:45:53.122948 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122953 | orchestrator |
2026-03-26 02:45:53.122958 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-26 02:45:53.122963 | orchestrator | Thursday 26 March 2026 02:45:50 +0000 (0:00:00.132) 0:00:29.960 ********
2026-03-26 02:45:53.122968 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122973 | orchestrator |
2026-03-26 02:45:53.122979 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-26 02:45:53.122984 | orchestrator | Thursday 26 March 2026 02:45:50 +0000 (0:00:00.140) 0:00:30.100 ********
2026-03-26 02:45:53.122989 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:45:53.122994 | orchestrator |
2026-03-26 02:45:53.122999 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-26 02:45:53.123004 | orchestrator | Thursday 26 March 2026 02:45:50 +0000 (0:00:00.163) 0:00:30.264 ********
2026-03-26 02:45:53.123009 | orchestrator | changed: [testbed-node-4] => {
2026-03-26 02:45:53.123014 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-26 02:45:53.123019 | orchestrator |         "ceph_osd_devices": {
2026-03-26 02:45:53.123024 | orchestrator |             "sdb": {
2026-03-26 02:45:53.123029 | orchestrator |                 "osd_lvm_uuid": "a652979e-9f40-503a-bbc8-6de5e605991e"
2026-03-26 02:45:53.123034 | orchestrator |             },
2026-03-26 02:45:53.123040 | orchestrator |             "sdc": {
2026-03-26 02:45:53.123045 | orchestrator |                 "osd_lvm_uuid": "b5eee7c3-8883-5bbe-be5a-75726e822543"
2026-03-26 02:45:53.123050 | orchestrator |             }
2026-03-26 02:45:53.123055 | orchestrator |         },
2026-03-26 02:45:53.123060 | orchestrator |         "lvm_volumes": [
2026-03-26 02:45:53.123065 | orchestrator |             {
2026-03-26 02:45:53.123070 | orchestrator |                 "data": "osd-block-a652979e-9f40-503a-bbc8-6de5e605991e",
2026-03-26 02:45:53.123075 | orchestrator |                 "data_vg": "ceph-a652979e-9f40-503a-bbc8-6de5e605991e"
2026-03-26 02:45:53.123080 | orchestrator |             },
2026-03-26 02:45:53.123085 | orchestrator |             {
2026-03-26 02:45:53.123090 | orchestrator |                 "data": "osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543",
2026-03-26 02:45:53.123133 | orchestrator |                 "data_vg": "ceph-b5eee7c3-8883-5bbe-be5a-75726e822543"
2026-03-26 02:45:53.123140 | orchestrator |             }
2026-03-26 02:45:53.123145 | orchestrator |         ]
2026-03-26 02:45:53.123150 | orchestrator |     }
2026-03-26 02:45:53.123155 | orchestrator | }
2026-03-26 02:45:53.123160 | orchestrator |
2026-03-26 02:45:53.123165 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-26 02:45:53.123170 | orchestrator | Thursday 26 March 2026 02:45:50 +0000 (0:00:00.467) 0:00:30.731 ********
2026-03-26 02:45:53.123175 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-26 02:45:53.123181 | orchestrator |
2026-03-26 02:45:53.123186 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-26 02:45:53.123191 | orchestrator |
2026-03-26 02:45:53.123196 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-26 02:45:53.123201 | orchestrator | Thursday 26 March 2026 02:45:52 +0000 (0:00:01.224) 0:00:31.956 ********
2026-03-26 02:45:53.123212 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-26 02:45:53.123217 | orchestrator |
2026-03-26 02:45:53.123222 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-26 02:45:53.123227 | orchestrator | Thursday 26 March 2026 02:45:52 +0000 (0:00:00.289) 0:00:32.246 ********
2026-03-26 02:45:53.123232 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:45:53.123237 | orchestrator |
2026-03-26 02:45:53.123242 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:45:53.123247 | orchestrator | Thursday 26 March 2026 02:45:52 +0000 (0:00:00.275) 0:00:32.521 ********
2026-03-26 02:45:53.123252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-26 02:45:53.123257 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-26 02:45:53.123263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-26 02:45:53.123268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-26 02:45:53.123273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-26 02:45:53.123282 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-26 02:46:02.450839 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-26 02:46:02.450974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-26 02:46:02.450997 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-26 02:46:02.451031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-26 02:46:02.451047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-26 02:46:02.451062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-26 02:46:02.451077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-26 02:46:02.451091 | orchestrator |
2026-03-26 02:46:02.451107 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451253 | orchestrator | Thursday 26 March 2026 02:45:53 +0000 (0:00:00.401) 0:00:32.923 ********
2026-03-26 02:46:02.451265 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451275 | orchestrator |
2026-03-26 02:46:02.451285 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451294 | orchestrator | Thursday 26 March 2026 02:45:53 +0000 (0:00:00.203) 0:00:33.126 ********
2026-03-26 02:46:02.451303 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451312 | orchestrator |
2026-03-26 02:46:02.451321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451330 | orchestrator | Thursday 26 March 2026 02:45:53 +0000 (0:00:00.198) 0:00:33.325 ********
2026-03-26 02:46:02.451339 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451350 | orchestrator |
2026-03-26 02:46:02.451360 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451371 | orchestrator | Thursday 26 March 2026 02:45:53 +0000 (0:00:00.234) 0:00:33.559 ********
2026-03-26 02:46:02.451381 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451392 | orchestrator |
2026-03-26 02:46:02.451402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451413 | orchestrator | Thursday 26 March 2026 02:45:54 +0000 (0:00:00.703) 0:00:34.263 ********
2026-03-26 02:46:02.451424 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451435 | orchestrator |
2026-03-26 02:46:02.451445 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451456 | orchestrator | Thursday 26 March 2026 02:45:54 +0000 (0:00:00.228) 0:00:34.491 ********
2026-03-26 02:46:02.451487 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451499 | orchestrator |
2026-03-26 02:46:02.451509 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451520 | orchestrator | Thursday 26 March 2026 02:45:54 +0000 (0:00:00.219) 0:00:34.710 ********
2026-03-26 02:46:02.451530 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451540 | orchestrator |
2026-03-26 02:46:02.451550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451561 | orchestrator | Thursday 26 March 2026 02:45:55 +0000 (0:00:00.226) 0:00:34.937 ********
2026-03-26 02:46:02.451571 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.451581 | orchestrator |
2026-03-26 02:46:02.451592 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451602 | orchestrator | Thursday 26 March 2026 02:45:55 +0000 (0:00:00.205) 0:00:35.142 ********
2026-03-26 02:46:02.451612 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539)
2026-03-26 02:46:02.451624 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539)
2026-03-26 02:46:02.451634 | orchestrator |
2026-03-26 02:46:02.451645 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451655 | orchestrator | Thursday 26 March 2026 02:45:55 +0000 (0:00:00.444) 0:00:35.587 ********
2026-03-26 02:46:02.451665 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d)
2026-03-26 02:46:02.451676 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d)
2026-03-26 02:46:02.451687 | orchestrator |
2026-03-26 02:46:02.451697 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451708 | orchestrator | Thursday 26 March 2026 02:45:56 +0000 (0:00:00.502) 0:00:36.089 ********
2026-03-26 02:46:02.451718 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102)
2026-03-26 02:46:02.451729 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102)
2026-03-26 02:46:02.451739 | orchestrator |
2026-03-26 02:46:02.451750 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451760 | orchestrator | Thursday 26 March 2026 02:45:56 +0000 (0:00:00.453) 0:00:36.543 ********
2026-03-26 02:46:02.451770 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2)
2026-03-26 02:46:02.451779 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2)
2026-03-26 02:46:02.451788 | orchestrator |
2026-03-26 02:46:02.451796 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:46:02.451805 | orchestrator | Thursday 26 March 2026 02:45:57 +0000 (0:00:00.511) 0:00:37.055 ********
2026-03-26 02:46:02.451814 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-26 02:46:02.451823 | orchestrator |
2026-03-26 02:46:02.451832 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.451860 | orchestrator | Thursday 26 March 2026 02:45:57 +0000 (0:00:00.407) 0:00:37.462 ********
2026-03-26 02:46:02.451870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-26 02:46:02.451879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-26 02:46:02.451888 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-26 02:46:02.451904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-26 02:46:02.451913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-26 02:46:02.451922 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-26 02:46:02.451939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-26 02:46:02.451947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-26 02:46:02.451956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-26 02:46:02.451965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-26 02:46:02.451974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-26 02:46:02.451983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-26 02:46:02.451991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-26 02:46:02.452000 | orchestrator |
2026-03-26 02:46:02.452009 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452018 | orchestrator | Thursday 26 March 2026 02:45:58 +0000 (0:00:00.682) 0:00:38.145 ********
2026-03-26 02:46:02.452027 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452036 | orchestrator |
2026-03-26 02:46:02.452045 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452060 | orchestrator | Thursday 26 March 2026 02:45:58 +0000 (0:00:00.204) 0:00:38.349 ********
2026-03-26 02:46:02.452074 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452088 | orchestrator |
2026-03-26 02:46:02.452105 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452147 | orchestrator | Thursday 26 March 2026 02:45:58 +0000 (0:00:00.228) 0:00:38.578 ********
2026-03-26 02:46:02.452163 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452178 | orchestrator |
2026-03-26 02:46:02.452193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452207 | orchestrator | Thursday 26 March 2026 02:45:58 +0000 (0:00:00.211) 0:00:38.789 ********
2026-03-26 02:46:02.452222 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452237 | orchestrator |
2026-03-26 02:46:02.452252 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452267 | orchestrator | Thursday 26 March 2026 02:45:59 +0000 (0:00:00.253) 0:00:39.043 ********
2026-03-26 02:46:02.452282 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452297 | orchestrator |
2026-03-26 02:46:02.452312 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452325 | orchestrator | Thursday 26 March 2026 02:45:59 +0000 (0:00:00.207) 0:00:39.251 ********
2026-03-26 02:46:02.452334 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452343 | orchestrator |
2026-03-26 02:46:02.452351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452360 | orchestrator | Thursday 26 March 2026 02:45:59 +0000 (0:00:00.227) 0:00:39.478 ********
2026-03-26 02:46:02.452369 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452377 | orchestrator |
2026-03-26 02:46:02.452386 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452395 | orchestrator | Thursday 26 March 2026 02:45:59 +0000 (0:00:00.219) 0:00:39.698 ********
2026-03-26 02:46:02.452403 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452412 | orchestrator |
2026-03-26 02:46:02.452421 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452430 | orchestrator | Thursday 26 March 2026 02:46:00 +0000 (0:00:00.236) 0:00:39.934 ********
2026-03-26 02:46:02.452438 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-26 02:46:02.452447 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-26 02:46:02.452456 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-26 02:46:02.452465 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-26 02:46:02.452474 | orchestrator |
2026-03-26 02:46:02.452494 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452503 | orchestrator | Thursday 26 March 2026 02:46:01 +0000 (0:00:00.952) 0:00:40.887 ********
2026-03-26 02:46:02.452512 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452521 | orchestrator |
2026-03-26 02:46:02.452529 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452538 | orchestrator | Thursday 26 March 2026 02:46:01 +0000 (0:00:00.188) 0:00:41.075 ********
2026-03-26 02:46:02.452547 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452555 | orchestrator |
2026-03-26 02:46:02.452564 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452573 | orchestrator | Thursday 26 March 2026 02:46:01 +0000 (0:00:00.194) 0:00:41.269 ********
2026-03-26 02:46:02.452581 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452590 | orchestrator |
2026-03-26 02:46:02.452599 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:46:02.452607 | orchestrator | Thursday 26 March 2026 02:46:02 +0000 (0:00:00.762) 0:00:42.032 ********
2026-03-26 02:46:02.452616 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:02.452625 | orchestrator |
2026-03-26 02:46:02.452642 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-26 02:46:06.825888 | orchestrator | Thursday 26 March 2026 02:46:02 +0000 (0:00:00.220) 0:00:42.252 ********
2026-03-26 02:46:06.826009 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2026-03-26 02:46:06.826079 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2026-03-26 02:46:06.826088 | orchestrator |
2026-03-26 02:46:06.826097 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-26 02:46:06.826181 | orchestrator | Thursday 26 March 2026 02:46:02 +0000 (0:00:00.197) 0:00:42.450 ********
2026-03-26 02:46:06.826194 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826202 | orchestrator |
2026-03-26 02:46:06.826210 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-26 02:46:06.826217 | orchestrator | Thursday 26 March 2026 02:46:02 +0000 (0:00:00.141) 0:00:42.591 ********
2026-03-26 02:46:06.826225 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826233 | orchestrator |
2026-03-26 02:46:06.826240 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-26 02:46:06.826248 | orchestrator | Thursday 26 March 2026 02:46:02 +0000 (0:00:00.148) 0:00:42.740 ********
2026-03-26 02:46:06.826255 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826262 | orchestrator |
2026-03-26 02:46:06.826269 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-26 02:46:06.826277 | orchestrator | Thursday 26 March 2026 02:46:03 +0000 (0:00:00.133) 0:00:42.874 ********
2026-03-26 02:46:06.826284 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:46:06.826292 | orchestrator |
2026-03-26 02:46:06.826300 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-26 02:46:06.826307 | orchestrator | Thursday 26 March 2026 02:46:03 +0000 (0:00:00.168) 0:00:43.043 ********
2026-03-26 02:46:06.826315 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83c4def8-4703-5f7c-9549-7666ff9f2b66'}})
2026-03-26 02:46:06.826323 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}})
2026-03-26 02:46:06.826330 | orchestrator |
2026-03-26 02:46:06.826337 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-26 02:46:06.826345 | orchestrator | Thursday 26 March 2026 02:46:03 +0000 (0:00:00.188) 0:00:43.231 ********
2026-03-26 02:46:06.826353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83c4def8-4703-5f7c-9549-7666ff9f2b66'}})
2026-03-26 02:46:06.826361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}})
2026-03-26 02:46:06.826369 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826393 | orchestrator |
2026-03-26 02:46:06.826401 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-26 02:46:06.826408 | orchestrator | Thursday 26 March 2026 02:46:03 +0000 (0:00:00.152) 0:00:43.383 ********
2026-03-26 02:46:06.826415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83c4def8-4703-5f7c-9549-7666ff9f2b66'}})
2026-03-26 02:46:06.826423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}})
2026-03-26 02:46:06.826432 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826440 | orchestrator |
2026-03-26 02:46:06.826449 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-26 02:46:06.826457 | orchestrator | Thursday 26 March 2026 02:46:03 +0000 (0:00:00.179) 0:00:43.562 ********
2026-03-26 02:46:06.826465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83c4def8-4703-5f7c-9549-7666ff9f2b66'}})
2026-03-26 02:46:06.826474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}})
2026-03-26 02:46:06.826482 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826491 | orchestrator |
2026-03-26 02:46:06.826499 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-26 02:46:06.826508 | orchestrator | Thursday 26 March 2026 02:46:03 +0000 (0:00:00.155) 0:00:43.717 ********
2026-03-26 02:46:06.826517 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:46:06.826524 | orchestrator |
2026-03-26 02:46:06.826531 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-26 02:46:06.826538 | orchestrator | Thursday 26 March 2026 02:46:04 +0000 (0:00:00.170) 0:00:43.888 ********
2026-03-26 02:46:06.826546 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:46:06.826553 | orchestrator |
2026-03-26 02:46:06.826560 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-26 02:46:06.826567 | orchestrator | Thursday 26 March 2026 02:46:04 +0000 (0:00:00.393) 0:00:44.282 ********
2026-03-26 02:46:06.826574 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826582 | orchestrator |
2026-03-26 02:46:06.826589 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-26 02:46:06.826596 | orchestrator | Thursday 26 March 2026 02:46:04 +0000 (0:00:00.153) 0:00:44.435 ********
2026-03-26 02:46:06.826603 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826611 | orchestrator |
2026-03-26 02:46:06.826618 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-26 02:46:06.826625 | orchestrator | Thursday 26 March 2026 02:46:04 +0000 (0:00:00.152) 0:00:44.588 ********
2026-03-26 02:46:06.826632 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:46:06.826638 | orchestrator |
2026-03-26 02:46:06.826645 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-26 02:46:06.826652 | orchestrator | Thursday 26 March 2026 02:46:04 +0000 (0:00:00.134) 0:00:44.723 ********
2026-03-26 02:46:06.826659 | orchestrator | ok: [testbed-node-5] => {
2026-03-26 02:46:06.826665 | orchestrator |     "ceph_osd_devices": {
2026-03-26 02:46:06.826672 | orchestrator |         "sdb": {
2026-03-26 02:46:06.826695 | orchestrator |  "osd_lvm_uuid": "83c4def8-4703-5f7c-9549-7666ff9f2b66" 2026-03-26 02:46:06.826703 | orchestrator |  }, 2026-03-26 02:46:06.826710 | orchestrator |  "sdc": { 2026-03-26 02:46:06.826716 | orchestrator |  "osd_lvm_uuid": "1fd8de68-da37-5e01-9bf2-5a04fcdcd771" 2026-03-26 02:46:06.826724 | orchestrator |  } 2026-03-26 02:46:06.826731 | orchestrator |  } 2026-03-26 02:46:06.826737 | orchestrator | } 2026-03-26 02:46:06.826759 | orchestrator | 2026-03-26 02:46:06.826772 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-26 02:46:06.826779 | orchestrator | Thursday 26 March 2026 02:46:05 +0000 (0:00:00.156) 0:00:44.879 ******** 2026-03-26 02:46:06.826786 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:46:06.826798 | orchestrator | 2026-03-26 02:46:06.826805 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-26 02:46:06.826812 | orchestrator | Thursday 26 March 2026 02:46:05 +0000 (0:00:00.145) 0:00:45.025 ******** 2026-03-26 02:46:06.826818 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:46:06.826825 | orchestrator | 2026-03-26 02:46:06.826832 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-26 02:46:06.826839 | orchestrator | Thursday 26 March 2026 02:46:05 +0000 (0:00:00.145) 0:00:45.171 ******** 2026-03-26 02:46:06.826845 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:46:06.826852 | orchestrator | 2026-03-26 02:46:06.826859 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-26 02:46:06.826866 | orchestrator | Thursday 26 March 2026 02:46:05 +0000 (0:00:00.139) 0:00:45.310 ******** 2026-03-26 02:46:06.826872 | orchestrator | changed: [testbed-node-5] => { 2026-03-26 02:46:06.826881 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-26 02:46:06.826892 | orchestrator | 
 "ceph_osd_devices": { 2026-03-26 02:46:06.826904 | orchestrator |  "sdb": { 2026-03-26 02:46:06.826915 | orchestrator |  "osd_lvm_uuid": "83c4def8-4703-5f7c-9549-7666ff9f2b66" 2026-03-26 02:46:06.826926 | orchestrator |  }, 2026-03-26 02:46:06.826936 | orchestrator |  "sdc": { 2026-03-26 02:46:06.826947 | orchestrator |  "osd_lvm_uuid": "1fd8de68-da37-5e01-9bf2-5a04fcdcd771" 2026-03-26 02:46:06.826957 | orchestrator |  } 2026-03-26 02:46:06.826967 | orchestrator |  }, 2026-03-26 02:46:06.826977 | orchestrator |  "lvm_volumes": [ 2026-03-26 02:46:06.826988 | orchestrator |  { 2026-03-26 02:46:06.826999 | orchestrator |  "data": "osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66", 2026-03-26 02:46:06.827011 | orchestrator |  "data_vg": "ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66" 2026-03-26 02:46:06.827022 | orchestrator |  }, 2026-03-26 02:46:06.827034 | orchestrator |  { 2026-03-26 02:46:06.827042 | orchestrator |  "data": "osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771", 2026-03-26 02:46:06.827048 | orchestrator |  "data_vg": "ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771" 2026-03-26 02:46:06.827055 | orchestrator |  } 2026-03-26 02:46:06.827062 | orchestrator |  ] 2026-03-26 02:46:06.827069 | orchestrator |  } 2026-03-26 02:46:06.827076 | orchestrator | } 2026-03-26 02:46:06.827082 | orchestrator | 2026-03-26 02:46:06.827089 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-26 02:46:06.827096 | orchestrator | Thursday 26 March 2026 02:46:05 +0000 (0:00:00.222) 0:00:45.533 ******** 2026-03-26 02:46:06.827103 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-26 02:46:06.827109 | orchestrator | 2026-03-26 02:46:06.827160 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 02:46:06.827168 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-26 02:46:06.827176 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-26 02:46:06.827183 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-26 02:46:06.827190 | orchestrator | 2026-03-26 02:46:06.827196 | orchestrator | 2026-03-26 02:46:06.827203 | orchestrator | 2026-03-26 02:46:06.827210 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 02:46:06.827217 | orchestrator | Thursday 26 March 2026 02:46:06 +0000 (0:00:01.078) 0:00:46.612 ******** 2026-03-26 02:46:06.827223 | orchestrator | =============================================================================== 2026-03-26 02:46:06.827230 | orchestrator | Write configuration file ------------------------------------------------ 4.30s 2026-03-26 02:46:06.827243 | orchestrator | Add known links to the list of available block devices ------------------ 1.61s 2026-03-26 02:46:06.827250 | orchestrator | Add known partitions to the list of available block devices ------------- 1.55s 2026-03-26 02:46:06.827257 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2026-03-26 02:46:06.827264 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2026-03-26 02:46:06.827270 | orchestrator | Add known partitions to the list of available block devices ------------- 0.95s 2026-03-26 02:46:06.827277 | orchestrator | Print configuration data ------------------------------------------------ 0.92s 2026-03-26 02:46:06.827284 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.84s 2026-03-26 02:46:06.827291 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2026-03-26 02:46:06.827297 | orchestrator | Add known partitions to the list of available block devices ------------- 0.78s 2026-03-26 
02:46:06.827304 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s 2026-03-26 02:46:06.827311 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-03-26 02:46:06.827318 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-03-26 02:46:06.827332 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-03-26 02:46:07.393306 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-03-26 02:46:07.393380 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.71s 2026-03-26 02:46:07.393387 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-03-26 02:46:07.393405 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-26 02:46:07.393410 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-26 02:46:07.393414 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.70s 2026-03-26 02:46:30.148782 | orchestrator | 2026-03-26 02:46:30 | INFO  | Task bfb64ee4-2646-4665-9bf0-88d97a04f762 (sync inventory) is running in background. Output coming soon. 
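
The configuration data printed by the play above pairs each OSD device's `osd_lvm_uuid` with an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that naming scheme follows; the function name is illustrative, not taken from the playbook, and the derivation is inferred from the logged output:

```python
def lvm_volumes_from_osd_devices(ceph_osd_devices):
    """Derive lvm_volumes entries from ceph_osd_devices (block-only layout).

    Mirrors the naming visible in the log: each osd_lvm_uuid yields an LV
    "osd-block-<uuid>" inside a VG "ceph-<uuid>".
    """
    return [
        {
            "data": f"osd-block-{dev['osd_lvm_uuid']}",
            "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
        }
        for dev in ceph_osd_devices.values()
    ]


# The ceph_osd_devices printed for testbed-node-5 above:
devices = {
    "sdb": {"osd_lvm_uuid": "83c4def8-4703-5f7c-9549-7666ff9f2b66"},
    "sdc": {"osd_lvm_uuid": "1fd8de68-da37-5e01-9bf2-5a04fcdcd771"},
}
print(lvm_volumes_from_osd_devices(devices))
```

Running this against the node's device dict reproduces the two `lvm_volumes` entries shown in the `_ceph_configure_lvm_config_data` output.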
2026-03-26 02:47:01.346613 | orchestrator | 2026-03-26 02:46:31 | INFO  | Starting group_vars file reorganization 2026-03-26 02:47:01.346757 | orchestrator | 2026-03-26 02:46:31 | INFO  | Moved 0 file(s) to their respective directories 2026-03-26 02:47:01.346774 | orchestrator | 2026-03-26 02:46:31 | INFO  | Group_vars file reorganization completed 2026-03-26 02:47:01.346782 | orchestrator | 2026-03-26 02:46:34 | INFO  | Starting variable preparation from inventory 2026-03-26 02:47:01.346789 | orchestrator | 2026-03-26 02:46:37 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-03-26 02:47:01.346796 | orchestrator | 2026-03-26 02:46:37 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-03-26 02:47:01.346802 | orchestrator | 2026-03-26 02:46:37 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-03-26 02:47:01.346808 | orchestrator | 2026-03-26 02:46:37 | INFO  | 3 file(s) written, 6 host(s) processed 2026-03-26 02:47:01.346815 | orchestrator | 2026-03-26 02:46:37 | INFO  | Variable preparation completed 2026-03-26 02:47:01.346821 | orchestrator | 2026-03-26 02:46:39 | INFO  | Starting inventory overwrite handling 2026-03-26 02:47:01.346827 | orchestrator | 2026-03-26 02:46:39 | INFO  | Handling group overwrites in 99-overwrite 2026-03-26 02:47:01.346833 | orchestrator | 2026-03-26 02:46:39 | INFO  | Removing group frr:children from 60-generic 2026-03-26 02:47:01.346839 | orchestrator | 2026-03-26 02:46:39 | INFO  | Removing group netbird:children from 50-infrastructure 2026-03-26 02:47:01.346845 | orchestrator | 2026-03-26 02:46:39 | INFO  | Removing group ceph-rgw from 50-ceph 2026-03-26 02:47:01.346888 | orchestrator | 2026-03-26 02:46:39 | INFO  | Removing group ceph-mds from 50-ceph 2026-03-26 02:47:01.346896 | orchestrator | 2026-03-26 02:46:39 | INFO  | Handling group overwrites in 20-roles 2026-03-26 02:47:01.346902 | orchestrator | 2026-03-26 02:46:39 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-03-26 02:47:01.346909 | orchestrator | 2026-03-26 02:46:39 | INFO  | Removed 5 group(s) in total 2026-03-26 02:47:01.346915 | orchestrator | 2026-03-26 02:46:39 | INFO  | Inventory overwrite handling completed 2026-03-26 02:47:01.346921 | orchestrator | 2026-03-26 02:46:41 | INFO  | Starting merge of inventory files 2026-03-26 02:47:01.346927 | orchestrator | 2026-03-26 02:46:41 | INFO  | Inventory files merged successfully 2026-03-26 02:47:01.346933 | orchestrator | 2026-03-26 02:46:46 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-03-26 02:47:01.346939 | orchestrator | 2026-03-26 02:46:59 | INFO  | Successfully wrote ClusterShell configuration 2026-03-26 02:47:01.346946 | orchestrator | [master 7661ed7] 2026-03-26-02-47 2026-03-26 02:47:01.346953 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-03-26 02:47:03.953854 | orchestrator | 2026-03-26 02:47:03 | INFO  | Task 84ec70a8-97fe-4cd7-9d64-7d2214c4453b (ceph-create-lvm-devices) was prepared for execution. 2026-03-26 02:47:03.953974 | orchestrator | 2026-03-26 02:47:03 | INFO  | It takes a moment until task 84ec70a8-97fe-4cd7-9d64-7d2214c4453b (ceph-create-lvm-devices) has been started and output is visible here. 
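
The `ceph-create-lvm-devices` task prepared above creates one block VG per OSD device and one `osd-block-*` LV inside it, as the "Create block VGs" / "Create block LVs" output below shows. The play drives this through Ansible, so the raw LVM commands are never shown; the sketch below only illustrates the equivalent `vgcreate`/`lvcreate` calls, with the `-l 100%FREE` extent choice being an assumption, not something the log confirms:

```python
def lvm_commands(vg_to_pv):
    """Illustrative vgcreate/lvcreate equivalents for the block VG/LV layout.

    vg_to_pv maps a VG name "ceph-<uuid>" to its physical volume path.
    The LV name is the VG name with the "ceph-" prefix swapped for
    "osd-block-", matching the names in the log.
    """
    cmds = []
    for vg, pv in vg_to_pv.items():
        lv = vg.replace("ceph-", "osd-block-", 1)
        cmds.append(f"vgcreate {vg} {pv}")
        # Assumption: the LV spans the whole VG.
        cmds.append(f"lvcreate -l 100%FREE -n {lv} {vg}")
    return cmds


# One of the VG -> PV pairs from testbed-node-3 below (PV path assumed):
vg_map = {"ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a": "/dev/sdb"}
for cmd in lvm_commands(vg_map):
    print(cmd)
```

This is a sketch of the naming relationship only; the actual devices and extent sizing are determined by the playbook's LVM tasks.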
2026-03-26 02:47:17.318392 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-26 02:47:17.318507 | orchestrator | 2.16.14 2026-03-26 02:47:17.318522 | orchestrator | 2026-03-26 02:47:17.318534 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-26 02:47:17.318546 | orchestrator | 2026-03-26 02:47:17.318556 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-26 02:47:17.318567 | orchestrator | Thursday 26 March 2026 02:47:08 +0000 (0:00:00.339) 0:00:00.339 ******** 2026-03-26 02:47:17.318578 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-26 02:47:17.318588 | orchestrator | 2026-03-26 02:47:17.318598 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-26 02:47:17.318608 | orchestrator | Thursday 26 March 2026 02:47:08 +0000 (0:00:00.268) 0:00:00.608 ******** 2026-03-26 02:47:17.318618 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:47:17.318628 | orchestrator | 2026-03-26 02:47:17.318638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.318648 | orchestrator | Thursday 26 March 2026 02:47:09 +0000 (0:00:00.308) 0:00:00.916 ******** 2026-03-26 02:47:17.318658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-26 02:47:17.318668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-26 02:47:17.318694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-26 02:47:17.318705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-26 02:47:17.318770 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-26 
02:47:17.318781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-26 02:47:17.318791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-26 02:47:17.318801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-26 02:47:17.318811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-26 02:47:17.318822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-26 02:47:17.318856 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-26 02:47:17.318866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-26 02:47:17.318876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-26 02:47:17.318886 | orchestrator | 2026-03-26 02:47:17.318896 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.318906 | orchestrator | Thursday 26 March 2026 02:47:09 +0000 (0:00:00.593) 0:00:01.510 ******** 2026-03-26 02:47:17.318916 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.318929 | orchestrator | 2026-03-26 02:47:17.318940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.318951 | orchestrator | Thursday 26 March 2026 02:47:10 +0000 (0:00:00.251) 0:00:01.762 ******** 2026-03-26 02:47:17.318962 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.318974 | orchestrator | 2026-03-26 02:47:17.318984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.318996 | orchestrator | Thursday 26 March 2026 02:47:10 +0000 (0:00:00.234) 0:00:01.997 ******** 2026-03-26 
02:47:17.319007 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.319022 | orchestrator | 2026-03-26 02:47:17.319038 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319055 | orchestrator | Thursday 26 March 2026 02:47:10 +0000 (0:00:00.243) 0:00:02.240 ******** 2026-03-26 02:47:17.319072 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.319087 | orchestrator | 2026-03-26 02:47:17.319104 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319121 | orchestrator | Thursday 26 March 2026 02:47:10 +0000 (0:00:00.245) 0:00:02.486 ******** 2026-03-26 02:47:17.319137 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.319152 | orchestrator | 2026-03-26 02:47:17.319166 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319183 | orchestrator | Thursday 26 March 2026 02:47:10 +0000 (0:00:00.225) 0:00:02.711 ******** 2026-03-26 02:47:17.319200 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.319246 | orchestrator | 2026-03-26 02:47:17.319262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319280 | orchestrator | Thursday 26 March 2026 02:47:11 +0000 (0:00:00.236) 0:00:02.948 ******** 2026-03-26 02:47:17.319297 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.319313 | orchestrator | 2026-03-26 02:47:17.319329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319346 | orchestrator | Thursday 26 March 2026 02:47:11 +0000 (0:00:00.229) 0:00:03.177 ******** 2026-03-26 02:47:17.319362 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.319379 | orchestrator | 2026-03-26 02:47:17.319394 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-26 02:47:17.319412 | orchestrator | Thursday 26 March 2026 02:47:11 +0000 (0:00:00.225) 0:00:03.403 ******** 2026-03-26 02:47:17.319428 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519) 2026-03-26 02:47:17.319445 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519) 2026-03-26 02:47:17.319461 | orchestrator | 2026-03-26 02:47:17.319477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319518 | orchestrator | Thursday 26 March 2026 02:47:12 +0000 (0:00:00.713) 0:00:04.116 ******** 2026-03-26 02:47:17.319536 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80) 2026-03-26 02:47:17.319551 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80) 2026-03-26 02:47:17.319565 | orchestrator | 2026-03-26 02:47:17.319582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319616 | orchestrator | Thursday 26 March 2026 02:47:13 +0000 (0:00:00.717) 0:00:04.834 ******** 2026-03-26 02:47:17.319634 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331) 2026-03-26 02:47:17.319650 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331) 2026-03-26 02:47:17.319667 | orchestrator | 2026-03-26 02:47:17.319680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319696 | orchestrator | Thursday 26 March 2026 02:47:14 +0000 (0:00:00.979) 0:00:05.813 ******** 2026-03-26 02:47:17.319711 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8) 2026-03-26 02:47:17.319739 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8) 2026-03-26 02:47:17.319756 | orchestrator | 2026-03-26 02:47:17.319769 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:47:17.319780 | orchestrator | Thursday 26 March 2026 02:47:14 +0000 (0:00:00.477) 0:00:06.290 ******** 2026-03-26 02:47:17.319790 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-26 02:47:17.319799 | orchestrator | 2026-03-26 02:47:17.319809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:17.319819 | orchestrator | Thursday 26 March 2026 02:47:14 +0000 (0:00:00.334) 0:00:06.625 ******** 2026-03-26 02:47:17.319829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-26 02:47:17.319838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-26 02:47:17.319848 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-26 02:47:17.319858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-26 02:47:17.319868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-26 02:47:17.319877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-26 02:47:17.319887 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-26 02:47:17.319897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-03-26 02:47:17.319906 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-26 02:47:17.319916 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-26 02:47:17.319926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-26 02:47:17.319935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-26 02:47:17.319945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-26 02:47:17.319954 | orchestrator | 2026-03-26 02:47:17.319964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:17.319974 | orchestrator | Thursday 26 March 2026 02:47:15 +0000 (0:00:00.461) 0:00:07.087 ******** 2026-03-26 02:47:17.319984 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.319993 | orchestrator | 2026-03-26 02:47:17.320003 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:17.320013 | orchestrator | Thursday 26 March 2026 02:47:15 +0000 (0:00:00.231) 0:00:07.318 ******** 2026-03-26 02:47:17.320023 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.320033 | orchestrator | 2026-03-26 02:47:17.320042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:17.320052 | orchestrator | Thursday 26 March 2026 02:47:15 +0000 (0:00:00.218) 0:00:07.537 ******** 2026-03-26 02:47:17.320061 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.320079 | orchestrator | 2026-03-26 02:47:17.320089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:17.320098 | orchestrator | Thursday 26 March 2026 02:47:16 +0000 (0:00:00.212) 0:00:07.750 ******** 2026-03-26 02:47:17.320108 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.320118 | orchestrator | 2026-03-26 02:47:17.320128 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-26 02:47:17.320137 | orchestrator | Thursday 26 March 2026 02:47:16 +0000 (0:00:00.214) 0:00:07.964 ******** 2026-03-26 02:47:17.320148 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.320165 | orchestrator | 2026-03-26 02:47:17.320180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:17.320196 | orchestrator | Thursday 26 March 2026 02:47:16 +0000 (0:00:00.203) 0:00:08.168 ******** 2026-03-26 02:47:17.320245 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.320260 | orchestrator | 2026-03-26 02:47:17.320276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:17.320293 | orchestrator | Thursday 26 March 2026 02:47:17 +0000 (0:00:00.670) 0:00:08.839 ******** 2026-03-26 02:47:17.320309 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:17.320326 | orchestrator | 2026-03-26 02:47:17.320356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:25.774281 | orchestrator | Thursday 26 March 2026 02:47:17 +0000 (0:00:00.209) 0:00:09.048 ******** 2026-03-26 02:47:25.774416 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:25.774446 | orchestrator | 2026-03-26 02:47:25.774465 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:25.774485 | orchestrator | Thursday 26 March 2026 02:47:17 +0000 (0:00:00.211) 0:00:09.259 ******** 2026-03-26 02:47:25.774503 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-26 02:47:25.774522 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-26 02:47:25.774542 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-03-26 02:47:25.774560 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-03-26 02:47:25.774579 | orchestrator | 2026-03-26 
02:47:25.774600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:25.774620 | orchestrator | Thursday 26 March 2026 02:47:18 +0000 (0:00:00.731) 0:00:09.991 ******** 2026-03-26 02:47:25.774638 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:25.774658 | orchestrator | 2026-03-26 02:47:25.774676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:25.774697 | orchestrator | Thursday 26 March 2026 02:47:18 +0000 (0:00:00.247) 0:00:10.238 ******** 2026-03-26 02:47:25.774737 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:25.774772 | orchestrator | 2026-03-26 02:47:25.774814 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:25.774835 | orchestrator | Thursday 26 March 2026 02:47:18 +0000 (0:00:00.241) 0:00:10.480 ******** 2026-03-26 02:47:25.774855 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:25.774873 | orchestrator | 2026-03-26 02:47:25.774890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:25.774909 | orchestrator | Thursday 26 March 2026 02:47:18 +0000 (0:00:00.229) 0:00:10.709 ******** 2026-03-26 02:47:25.774927 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:25.774946 | orchestrator | 2026-03-26 02:47:25.774966 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-26 02:47:25.774983 | orchestrator | Thursday 26 March 2026 02:47:19 +0000 (0:00:00.238) 0:00:10.948 ******** 2026-03-26 02:47:25.775000 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:47:25.775017 | orchestrator | 2026-03-26 02:47:25.775034 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-26 02:47:25.775051 | orchestrator | Thursday 26 March 2026 02:47:19 +0000 (0:00:00.154) 
0:00:11.102 ********
2026-03-26 02:47:25.775071 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}})
2026-03-26 02:47:25.775124 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e2623153-bc41-510f-8884-ef957bb96082'}})
2026-03-26 02:47:25.775144 | orchestrator |
2026-03-26 02:47:25.775163 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-26 02:47:25.775182 | orchestrator | Thursday 26 March 2026 02:47:19 +0000 (0:00:00.217) 0:00:11.320 ********
2026-03-26 02:47:25.775202 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.775290 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.775310 | orchestrator |
2026-03-26 02:47:25.775328 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-26 02:47:25.775346 | orchestrator | Thursday 26 March 2026 02:47:21 +0000 (0:00:02.059) 0:00:13.379 ********
2026-03-26 02:47:25.775358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.775371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.775382 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.775393 | orchestrator |
2026-03-26 02:47:25.775404 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-26 02:47:25.775415 | orchestrator | Thursday 26 March 2026 02:47:22 +0000 (0:00:00.384) 0:00:13.764 ********
2026-03-26 02:47:25.775426 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.775437 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.775448 | orchestrator |
2026-03-26 02:47:25.775459 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-26 02:47:25.775470 | orchestrator | Thursday 26 March 2026 02:47:23 +0000 (0:00:01.510) 0:00:15.275 ********
2026-03-26 02:47:25.775481 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.775493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.775504 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.775515 | orchestrator |
2026-03-26 02:47:25.775525 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-26 02:47:25.775536 | orchestrator | Thursday 26 March 2026 02:47:23 +0000 (0:00:00.154) 0:00:15.430 ********
2026-03-26 02:47:25.775572 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.775584 | orchestrator |
2026-03-26 02:47:25.775595 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-26 02:47:25.775606 | orchestrator | Thursday 26 March 2026 02:47:23 +0000 (0:00:00.157) 0:00:15.587 ********
2026-03-26 02:47:25.775619 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.775638 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.775657 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.775675 | orchestrator |
2026-03-26 02:47:25.775693 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-26 02:47:25.775711 | orchestrator | Thursday 26 March 2026 02:47:24 +0000 (0:00:00.175) 0:00:15.763 ********
2026-03-26 02:47:25.775747 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.775765 | orchestrator |
2026-03-26 02:47:25.775783 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-26 02:47:25.775800 | orchestrator | Thursday 26 March 2026 02:47:24 +0000 (0:00:00.146) 0:00:15.909 ********
2026-03-26 02:47:25.775828 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.775846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.775864 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.775882 | orchestrator |
2026-03-26 02:47:25.775900 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-26 02:47:25.775918 | orchestrator | Thursday 26 March 2026 02:47:24 +0000 (0:00:00.161) 0:00:16.071 ********
2026-03-26 02:47:25.775935 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.775954 | orchestrator |
2026-03-26 02:47:25.775971 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-26 02:47:25.775989 | orchestrator | Thursday 26 March 2026 02:47:24 +0000 (0:00:00.141) 0:00:16.212 ********
2026-03-26 02:47:25.776006 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.776025 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.776044 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.776063 | orchestrator |
2026-03-26 02:47:25.776082 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-26 02:47:25.776101 | orchestrator | Thursday 26 March 2026 02:47:24 +0000 (0:00:00.156) 0:00:16.393 ********
2026-03-26 02:47:25.776121 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:25.776141 | orchestrator |
2026-03-26 02:47:25.776160 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-26 02:47:25.776184 | orchestrator | Thursday 26 March 2026 02:47:24 +0000 (0:00:00.156) 0:00:16.550 ********
2026-03-26 02:47:25.776212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.776263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.776282 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.776299 | orchestrator |
2026-03-26 02:47:25.776317 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-26 02:47:25.776333 | orchestrator | Thursday 26 March 2026 02:47:24 +0000 (0:00:00.168) 0:00:16.719 ********
2026-03-26 02:47:25.776351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.776369 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.776385 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.776402 | orchestrator |
2026-03-26 02:47:25.776418 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-26 02:47:25.776433 | orchestrator | Thursday 26 March 2026 02:47:25 +0000 (0:00:00.406) 0:00:17.125 ********
2026-03-26 02:47:25.776449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:25.776465 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:25.776500 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.776519 | orchestrator |
2026-03-26 02:47:25.776539 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-26 02:47:25.776557 | orchestrator | Thursday 26 March 2026 02:47:25 +0000 (0:00:00.238) 0:00:17.364 ********
2026-03-26 02:47:25.776576 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:25.776588 | orchestrator |
2026-03-26 02:47:25.776599 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-26 02:47:25.776628 | orchestrator | Thursday 26 March 2026 02:47:25 +0000 (0:00:00.143) 0:00:17.507 ********
2026-03-26 02:47:32.655105 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.655307 | orchestrator |
2026-03-26 02:47:32.655329 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-26 02:47:32.655343 | orchestrator | Thursday 26 March 2026 02:47:25 +0000 (0:00:00.150) 0:00:17.657 ********
2026-03-26 02:47:32.655358 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.655378 | orchestrator |
2026-03-26 02:47:32.655438 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-26 02:47:32.655460 | orchestrator | Thursday 26 March 2026 02:47:26 +0000 (0:00:00.151) 0:00:17.808 ********
2026-03-26 02:47:32.655481 | orchestrator | ok: [testbed-node-3] => {
2026-03-26 02:47:32.655500 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-03-26 02:47:32.655520 | orchestrator | }
2026-03-26 02:47:32.655532 | orchestrator |
2026-03-26 02:47:32.655543 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-26 02:47:32.655555 | orchestrator | Thursday 26 March 2026 02:47:26 +0000 (0:00:00.156) 0:00:17.965 ********
2026-03-26 02:47:32.655568 | orchestrator | ok: [testbed-node-3] => {
2026-03-26 02:47:32.655587 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-03-26 02:47:32.655608 | orchestrator | }
2026-03-26 02:47:32.655629 | orchestrator |
2026-03-26 02:47:32.655648 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-26 02:47:32.655689 | orchestrator | Thursday 26 March 2026 02:47:26 +0000 (0:00:00.154) 0:00:18.119 ********
2026-03-26 02:47:32.655712 | orchestrator | ok: [testbed-node-3] => {
2026-03-26 02:47:32.655732 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-03-26 02:47:32.655752 | orchestrator | }
2026-03-26 02:47:32.655773 | orchestrator |
2026-03-26 02:47:32.655792 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-26 02:47:32.655811 | orchestrator | Thursday 26 March 2026 02:47:26 +0000 (0:00:00.178) 0:00:18.298 ********
2026-03-26 02:47:32.655833 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:32.655852 | orchestrator |
2026-03-26 02:47:32.655873 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-26 02:47:32.655893 | orchestrator | Thursday 26 March 2026 02:47:27 +0000 (0:00:00.701) 0:00:18.999 ********
2026-03-26 02:47:32.655914 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:32.655933 | orchestrator |
2026-03-26 02:47:32.655954 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-26 02:47:32.655974 | orchestrator | Thursday 26 March 2026 02:47:27 +0000 (0:00:00.503) 0:00:19.502 ********
2026-03-26 02:47:32.655994 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:32.656013 | orchestrator |
2026-03-26 02:47:32.656033 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-26 02:47:32.656052 | orchestrator | Thursday 26 March 2026 02:47:28 +0000 (0:00:00.389) 0:00:20.026 ********
2026-03-26 02:47:32.656073 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:32.656092 | orchestrator |
2026-03-26 02:47:32.656113 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-26 02:47:32.656132 | orchestrator | Thursday 26 March 2026 02:47:28 +0000 (0:00:00.389) 0:00:20.416 ********
2026-03-26 02:47:32.656151 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656171 | orchestrator |
2026-03-26 02:47:32.656191 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-26 02:47:32.656270 | orchestrator | Thursday 26 March 2026 02:47:28 +0000 (0:00:00.120) 0:00:20.536 ********
2026-03-26 02:47:32.656285 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656296 | orchestrator |
2026-03-26 02:47:32.656307 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-26 02:47:32.656319 | orchestrator | Thursday 26 March 2026 02:47:28 +0000 (0:00:00.126) 0:00:20.663 ********
2026-03-26 02:47:32.656330 | orchestrator | ok: [testbed-node-3] => {
2026-03-26 02:47:32.656341 | orchestrator |  "vgs_report": {
2026-03-26 02:47:32.656352 | orchestrator |  "vg": []
2026-03-26 02:47:32.656363 | orchestrator |  }
2026-03-26 02:47:32.656374 | orchestrator | }
2026-03-26 02:47:32.656385 | orchestrator |
2026-03-26 02:47:32.656396 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-26 02:47:32.656407 | orchestrator | Thursday 26 March 2026 02:47:29 +0000 (0:00:00.147) 0:00:20.811 ********
2026-03-26 02:47:32.656418 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656429 | orchestrator |
2026-03-26 02:47:32.656440 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-26 02:47:32.656450 | orchestrator | Thursday 26 March 2026 02:47:29 +0000 (0:00:00.138) 0:00:20.949 ********
2026-03-26 02:47:32.656461 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656472 | orchestrator |
2026-03-26 02:47:32.656483 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-26 02:47:32.656494 | orchestrator | Thursday 26 March 2026 02:47:29 +0000 (0:00:00.143) 0:00:21.093 ********
2026-03-26 02:47:32.656505 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656516 | orchestrator |
2026-03-26 02:47:32.656526 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-26 02:47:32.656537 | orchestrator | Thursday 26 March 2026 02:47:29 +0000 (0:00:00.154) 0:00:21.247 ********
2026-03-26 02:47:32.656548 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656559 | orchestrator |
2026-03-26 02:47:32.656570 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-26 02:47:32.656581 | orchestrator | Thursday 26 March 2026 02:47:29 +0000 (0:00:00.151) 0:00:21.399 ********
2026-03-26 02:47:32.656591 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656602 | orchestrator |
2026-03-26 02:47:32.656613 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-26 02:47:32.656624 | orchestrator | Thursday 26 March 2026 02:47:29 +0000 (0:00:00.155) 0:00:21.554 ********
2026-03-26 02:47:32.656635 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656646 | orchestrator |
2026-03-26 02:47:32.656657 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-26 02:47:32.656667 | orchestrator | Thursday 26 March 2026 02:47:29 +0000 (0:00:00.154) 0:00:21.709 ********
2026-03-26 02:47:32.656678 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656689 | orchestrator |
2026-03-26 02:47:32.656700 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-26 02:47:32.656711 | orchestrator | Thursday 26 March 2026 02:47:30 +0000 (0:00:00.144) 0:00:21.854 ********
2026-03-26 02:47:32.656745 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656757 | orchestrator |
2026-03-26 02:47:32.656768 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-26 02:47:32.656779 | orchestrator | Thursday 26 March 2026 02:47:30 +0000 (0:00:00.370) 0:00:22.224 ********
2026-03-26 02:47:32.656790 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656801 | orchestrator |
2026-03-26 02:47:32.656812 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-26 02:47:32.656823 | orchestrator | Thursday 26 March 2026 02:47:30 +0000 (0:00:00.149) 0:00:22.373 ********
2026-03-26 02:47:32.656834 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656845 | orchestrator |
2026-03-26 02:47:32.656856 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-26 02:47:32.656867 | orchestrator | Thursday 26 March 2026 02:47:30 +0000 (0:00:00.162) 0:00:22.535 ********
2026-03-26 02:47:32.656886 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656897 | orchestrator |
2026-03-26 02:47:32.656908 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-26 02:47:32.656919 | orchestrator | Thursday 26 March 2026 02:47:30 +0000 (0:00:00.156) 0:00:22.692 ********
2026-03-26 02:47:32.656930 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656941 | orchestrator |
2026-03-26 02:47:32.656959 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-26 02:47:32.656970 | orchestrator | Thursday 26 March 2026 02:47:31 +0000 (0:00:00.149) 0:00:22.842 ********
2026-03-26 02:47:32.656981 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.656992 | orchestrator |
2026-03-26 02:47:32.657003 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-26 02:47:32.657013 | orchestrator | Thursday 26 March 2026 02:47:31 +0000 (0:00:00.150) 0:00:22.992 ********
2026-03-26 02:47:32.657024 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.657035 | orchestrator |
2026-03-26 02:47:32.657046 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-26 02:47:32.657057 | orchestrator | Thursday 26 March 2026 02:47:31 +0000 (0:00:00.142) 0:00:23.134 ********
2026-03-26 02:47:32.657069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:32.657081 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:32.657092 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.657103 | orchestrator |
2026-03-26 02:47:32.657114 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-26 02:47:32.657125 | orchestrator | Thursday 26 March 2026 02:47:31 +0000 (0:00:00.170) 0:00:23.305 ********
2026-03-26 02:47:32.657136 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:32.657147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:32.657157 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.657168 | orchestrator |
2026-03-26 02:47:32.657179 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-26 02:47:32.657190 | orchestrator | Thursday 26 March 2026 02:47:31 +0000 (0:00:00.167) 0:00:23.472 ********
2026-03-26 02:47:32.657201 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:32.657212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:32.657223 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.657297 | orchestrator |
2026-03-26 02:47:32.657308 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-26 02:47:32.657319 | orchestrator | Thursday 26 March 2026 02:47:31 +0000 (0:00:00.166) 0:00:23.639 ********
2026-03-26 02:47:32.657330 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:32.657341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:32.657352 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.657363 | orchestrator |
2026-03-26 02:47:32.657374 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-26 02:47:32.657385 | orchestrator | Thursday 26 March 2026 02:47:32 +0000 (0:00:00.166) 0:00:23.806 ********
2026-03-26 02:47:32.657404 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:32.657415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:32.657429 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:32.657449 | orchestrator |
2026-03-26 02:47:32.657469 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-26 02:47:32.657489 | orchestrator | Thursday 26 March 2026 02:47:32 +0000 (0:00:00.415) 0:00:24.221 ********
2026-03-26 02:47:32.657522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:38.411633 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:38.411738 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:38.411758 | orchestrator |
2026-03-26 02:47:38.411773 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-26 02:47:38.411790 | orchestrator | Thursday 26 March 2026 02:47:32 +0000 (0:00:00.171) 0:00:24.392 ********
2026-03-26 02:47:38.411800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:38.411809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:38.411817 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:38.411825 | orchestrator |
2026-03-26 02:47:38.411848 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-26 02:47:38.411856 | orchestrator | Thursday 26 March 2026 02:47:32 +0000 (0:00:00.180) 0:00:24.573 ********
2026-03-26 02:47:38.411865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:38.411873 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:38.411881 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:38.411889 | orchestrator |
2026-03-26 02:47:38.411897 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-26 02:47:38.411905 | orchestrator | Thursday 26 March 2026 02:47:33 +0000 (0:00:00.184) 0:00:24.757 ********
2026-03-26 02:47:38.411913 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:38.411922 | orchestrator |
2026-03-26 02:47:38.411930 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-26 02:47:38.411938 | orchestrator | Thursday 26 March 2026 02:47:33 +0000 (0:00:00.531) 0:00:25.289 ********
2026-03-26 02:47:38.411946 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:38.411954 | orchestrator |
2026-03-26 02:47:38.411962 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-26 02:47:38.411970 | orchestrator | Thursday 26 March 2026 02:47:34 +0000 (0:00:00.530) 0:00:25.819 ********
2026-03-26 02:47:38.411978 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:47:38.411986 | orchestrator |
2026-03-26 02:47:38.411994 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-26 02:47:38.412002 | orchestrator | Thursday 26 March 2026 02:47:34 +0000 (0:00:00.165) 0:00:25.985 ********
2026-03-26 02:47:38.412011 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'vg_name': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:38.412020 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'vg_name': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:38.412046 | orchestrator |
2026-03-26 02:47:38.412054 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-26 02:47:38.412062 | orchestrator | Thursday 26 March 2026 02:47:34 +0000 (0:00:00.211) 0:00:26.197 ********
2026-03-26 02:47:38.412070 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:38.412078 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:38.412086 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:38.412094 | orchestrator |
2026-03-26 02:47:38.412102 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-26 02:47:38.412110 | orchestrator | Thursday 26 March 2026 02:47:34 +0000 (0:00:00.180) 0:00:26.377 ********
2026-03-26 02:47:38.412118 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:38.412126 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:38.412134 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:38.412142 | orchestrator |
2026-03-26 02:47:38.412150 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-26 02:47:38.412158 | orchestrator | Thursday 26 March 2026 02:47:34 +0000 (0:00:00.186) 0:00:26.564 ********
2026-03-26 02:47:38.412167 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 02:47:38.412176 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 02:47:38.412185 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:47:38.412194 | orchestrator |
2026-03-26 02:47:38.412203 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-26 02:47:38.412212 | orchestrator | Thursday 26 March 2026 02:47:35 +0000 (0:00:00.190) 0:00:26.754 ********
2026-03-26 02:47:38.412255 | orchestrator | ok: [testbed-node-3] => {
2026-03-26 02:47:38.412266 | orchestrator |  "lvm_report": {
2026-03-26 02:47:38.412275 | orchestrator |  "lv": [
2026-03-26 02:47:38.412284 | orchestrator |  {
2026-03-26 02:47:38.412294 | orchestrator |  "lv_name": "osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a",
2026-03-26 02:47:38.412303 | orchestrator |  "vg_name": "ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a"
2026-03-26 02:47:38.412312 | orchestrator |  },
2026-03-26 02:47:38.412321 | orchestrator |  {
2026-03-26 02:47:38.412330 | orchestrator |  "lv_name": "osd-block-e2623153-bc41-510f-8884-ef957bb96082",
2026-03-26 02:47:38.412340 | orchestrator |  "vg_name": "ceph-e2623153-bc41-510f-8884-ef957bb96082"
2026-03-26 02:47:38.412349 | orchestrator |  }
2026-03-26 02:47:38.412358 | orchestrator |  ],
2026-03-26 02:47:38.412367 | orchestrator |  "pv": [
2026-03-26 02:47:38.412376 | orchestrator |  {
2026-03-26 02:47:38.412386 | orchestrator |  "pv_name": "/dev/sdb",
2026-03-26 02:47:38.412395 | orchestrator |  "vg_name": "ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a"
2026-03-26 02:47:38.412404 | orchestrator |  },
2026-03-26 02:47:38.412413 | orchestrator |  {
2026-03-26 02:47:38.412428 | orchestrator |  "pv_name": "/dev/sdc",
2026-03-26 02:47:38.412437 | orchestrator |  "vg_name": "ceph-e2623153-bc41-510f-8884-ef957bb96082"
2026-03-26 02:47:38.412446 | orchestrator |  }
2026-03-26 02:47:38.412456 | orchestrator |  ]
2026-03-26 02:47:38.412464 | orchestrator |  }
2026-03-26 02:47:38.412474 | orchestrator | }
2026-03-26 02:47:38.412489 | orchestrator |
2026-03-26 02:47:38.412498 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-26 02:47:38.412507 | orchestrator |
2026-03-26 02:47:38.412517 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-26 02:47:38.412526 | orchestrator | Thursday 26 March 2026 02:47:35 +0000 (0:00:00.577) 0:00:27.332 ********
2026-03-26 02:47:38.412534 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-26 02:47:38.412542 | orchestrator |
2026-03-26 02:47:38.412550 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-26 02:47:38.412559 | orchestrator | Thursday 26 March 2026 02:47:35 +0000 (0:00:00.281) 0:00:27.613 ********
2026-03-26 02:47:38.412566 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:47:38.412574 | orchestrator |
2026-03-26 02:47:38.412582 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:38.412590 | orchestrator | Thursday 26 March 2026 02:47:36 +0000 (0:00:00.254) 0:00:27.868 ********
2026-03-26 02:47:38.412598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-26 02:47:38.412606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-26 02:47:38.412614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-26 02:47:38.412622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-26 02:47:38.412630 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-26 02:47:38.412638 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-26 02:47:38.412646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-26 02:47:38.412654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-26 02:47:38.412662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-26 02:47:38.412670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-26 02:47:38.412678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-26 02:47:38.412686 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-26 02:47:38.412694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-26 02:47:38.412702 | orchestrator |
2026-03-26 02:47:38.412710 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:38.412717 | orchestrator | Thursday 26 March 2026 02:47:36 +0000 (0:00:00.487) 0:00:28.355 ********
2026-03-26 02:47:38.412725 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:38.412733 | orchestrator |
2026-03-26 02:47:38.412741 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:38.412749 | orchestrator | Thursday 26 March 2026 02:47:36 +0000 (0:00:00.219) 0:00:28.575 ********
2026-03-26 02:47:38.412757 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:38.412765 | orchestrator |
2026-03-26 02:47:38.412773 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:38.412781 | orchestrator | Thursday 26 March 2026 02:47:37 +0000 (0:00:00.232) 0:00:28.807 ********
2026-03-26 02:47:38.412789 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:38.412798 | orchestrator |
2026-03-26 02:47:38.412806 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:38.412814 | orchestrator | Thursday 26 March 2026 02:47:37 +0000 (0:00:00.214) 0:00:29.022 ********
2026-03-26 02:47:38.412822 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:38.412830 | orchestrator |
2026-03-26 02:47:38.412838 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:38.412846 | orchestrator | Thursday 26 March 2026 02:47:37 +0000 (0:00:00.206) 0:00:29.228 ********
2026-03-26 02:47:38.412859 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:38.412867 | orchestrator |
2026-03-26 02:47:38.412875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:38.412883 | orchestrator | Thursday 26 March 2026 02:47:37 +0000 (0:00:00.229) 0:00:29.457 ********
2026-03-26 02:47:38.412891 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:38.412899 | orchestrator |
2026-03-26 02:47:38.412913 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:49.424599 | orchestrator | Thursday 26 March 2026 02:47:38 +0000 (0:00:00.689) 0:00:30.147 ********
2026-03-26 02:47:49.424703 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:49.424713 | orchestrator |
2026-03-26 02:47:49.424719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:49.424725 | orchestrator | Thursday 26 March 2026 02:47:38 +0000 (0:00:00.240) 0:00:30.387 ********
2026-03-26 02:47:49.424731 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:47:49.424736 | orchestrator |
2026-03-26 02:47:49.424742 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:49.424747 | orchestrator | Thursday 26 March 2026 02:47:38 +0000 (0:00:00.218) 0:00:30.606 ********
2026-03-26 02:47:49.424752 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea)
2026-03-26 02:47:49.424759 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea)
2026-03-26 02:47:49.424764 | orchestrator |
2026-03-26 02:47:49.424782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:49.424787 | orchestrator | Thursday 26 March 2026 02:47:39 +0000 (0:00:00.445) 0:00:31.051 ********
2026-03-26 02:47:49.424792 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab)
2026-03-26 02:47:49.424798 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab)
2026-03-26 02:47:49.424807 | orchestrator |
2026-03-26 02:47:49.424814 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:49.424822 | orchestrator | Thursday 26 March 2026 02:47:39 +0000 (0:00:00.482) 0:00:31.534 ********
2026-03-26 02:47:49.424830 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263)
2026-03-26 02:47:49.424838 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263)
2026-03-26 02:47:49.424846 | orchestrator |
2026-03-26 02:47:49.424854 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:49.424862 | orchestrator | Thursday 26 March 2026 02:47:40 +0000 (0:00:00.486) 0:00:32.021 ********
2026-03-26 02:47:49.424872 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44)
2026-03-26 02:47:49.424881 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44)
2026-03-26 02:47:49.424890 | orchestrator |
2026-03-26 02:47:49.424898 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-26 02:47:49.424906 | orchestrator | Thursday 26 March 2026 02:47:40 +0000 (0:00:00.503) 0:00:32.524 ********
2026-03-26 02:47:49.424912 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-26 02:47:49.424918 | orchestrator |
2026-03-26 02:47:49.424923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-26 02:47:49.424928 | orchestrator | Thursday 26 March 2026 02:47:41 +0000 (0:00:00.376) 0:00:32.900 ********
2026-03-26 02:47:49.424933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 =>
(item=loop0) 2026-03-26 02:47:49.424939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-26 02:47:49.424944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-26 02:47:49.424967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-26 02:47:49.424973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-26 02:47:49.424978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-26 02:47:49.424983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-26 02:47:49.424988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-26 02:47:49.424993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-26 02:47:49.424998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-26 02:47:49.425003 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-26 02:47:49.425008 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-26 02:47:49.425013 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-26 02:47:49.425019 | orchestrator | 2026-03-26 02:47:49.425024 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425029 | orchestrator | Thursday 26 March 2026 02:47:41 +0000 (0:00:00.439) 0:00:33.340 ******** 2026-03-26 02:47:49.425034 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425039 | orchestrator | 2026-03-26 
02:47:49.425044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425049 | orchestrator | Thursday 26 March 2026 02:47:41 +0000 (0:00:00.210) 0:00:33.551 ******** 2026-03-26 02:47:49.425055 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425060 | orchestrator | 2026-03-26 02:47:49.425065 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425070 | orchestrator | Thursday 26 March 2026 02:47:42 +0000 (0:00:00.248) 0:00:33.799 ******** 2026-03-26 02:47:49.425075 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425080 | orchestrator | 2026-03-26 02:47:49.425101 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425110 | orchestrator | Thursday 26 March 2026 02:47:42 +0000 (0:00:00.706) 0:00:34.506 ******** 2026-03-26 02:47:49.425118 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425127 | orchestrator | 2026-03-26 02:47:49.425135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425144 | orchestrator | Thursday 26 March 2026 02:47:42 +0000 (0:00:00.221) 0:00:34.727 ******** 2026-03-26 02:47:49.425149 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425155 | orchestrator | 2026-03-26 02:47:49.425160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425166 | orchestrator | Thursday 26 March 2026 02:47:43 +0000 (0:00:00.239) 0:00:34.966 ******** 2026-03-26 02:47:49.425171 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425176 | orchestrator | 2026-03-26 02:47:49.425183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425191 | orchestrator | Thursday 26 March 2026 02:47:43 +0000 (0:00:00.215) 
0:00:35.182 ******** 2026-03-26 02:47:49.425203 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425212 | orchestrator | 2026-03-26 02:47:49.425221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425229 | orchestrator | Thursday 26 March 2026 02:47:43 +0000 (0:00:00.229) 0:00:35.412 ******** 2026-03-26 02:47:49.425238 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425277 | orchestrator | 2026-03-26 02:47:49.425284 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425289 | orchestrator | Thursday 26 March 2026 02:47:43 +0000 (0:00:00.230) 0:00:35.642 ******** 2026-03-26 02:47:49.425294 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-26 02:47:49.425306 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-26 02:47:49.425312 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-26 02:47:49.425317 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-26 02:47:49.425322 | orchestrator | 2026-03-26 02:47:49.425327 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425333 | orchestrator | Thursday 26 March 2026 02:47:44 +0000 (0:00:00.728) 0:00:36.371 ******** 2026-03-26 02:47:49.425338 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425343 | orchestrator | 2026-03-26 02:47:49.425348 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425353 | orchestrator | Thursday 26 March 2026 02:47:44 +0000 (0:00:00.236) 0:00:36.608 ******** 2026-03-26 02:47:49.425358 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425364 | orchestrator | 2026-03-26 02:47:49.425369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425374 | orchestrator | Thursday 26 
March 2026 02:47:45 +0000 (0:00:00.222) 0:00:36.831 ******** 2026-03-26 02:47:49.425379 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425385 | orchestrator | 2026-03-26 02:47:49.425390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:47:49.425395 | orchestrator | Thursday 26 March 2026 02:47:45 +0000 (0:00:00.241) 0:00:37.073 ******** 2026-03-26 02:47:49.425400 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425405 | orchestrator | 2026-03-26 02:47:49.425410 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-26 02:47:49.425415 | orchestrator | Thursday 26 March 2026 02:47:45 +0000 (0:00:00.219) 0:00:37.292 ******** 2026-03-26 02:47:49.425421 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425426 | orchestrator | 2026-03-26 02:47:49.425431 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-26 02:47:49.425436 | orchestrator | Thursday 26 March 2026 02:47:45 +0000 (0:00:00.384) 0:00:37.677 ******** 2026-03-26 02:47:49.425441 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a652979e-9f40-503a-bbc8-6de5e605991e'}}) 2026-03-26 02:47:49.425447 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5eee7c3-8883-5bbe-be5a-75726e822543'}}) 2026-03-26 02:47:49.425452 | orchestrator | 2026-03-26 02:47:49.425457 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-26 02:47:49.425462 | orchestrator | Thursday 26 March 2026 02:47:46 +0000 (0:00:00.206) 0:00:37.883 ******** 2026-03-26 02:47:49.425469 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}) 2026-03-26 02:47:49.425475 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}) 2026-03-26 02:47:49.425481 | orchestrator | 2026-03-26 02:47:49.425486 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-26 02:47:49.425491 | orchestrator | Thursday 26 March 2026 02:47:47 +0000 (0:00:01.739) 0:00:39.622 ******** 2026-03-26 02:47:49.425496 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:47:49.425502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:49.425507 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:49.425512 | orchestrator | 2026-03-26 02:47:49.425518 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-26 02:47:49.425523 | orchestrator | Thursday 26 March 2026 02:47:48 +0000 (0:00:00.244) 0:00:39.867 ******** 2026-03-26 02:47:49.425528 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}) 2026-03-26 02:47:49.425541 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}) 2026-03-26 02:47:55.455223 | orchestrator | 2026-03-26 02:47:55.455449 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-26 02:47:55.455482 | orchestrator | Thursday 26 March 2026 02:47:49 +0000 (0:00:01.289) 0:00:41.156 ******** 2026-03-26 02:47:55.455496 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 
'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:47:55.455510 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:55.455521 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.455533 | orchestrator | 2026-03-26 02:47:55.455561 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-26 02:47:55.455573 | orchestrator | Thursday 26 March 2026 02:47:49 +0000 (0:00:00.187) 0:00:41.344 ******** 2026-03-26 02:47:55.455584 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.455595 | orchestrator | 2026-03-26 02:47:55.455606 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-26 02:47:55.455617 | orchestrator | Thursday 26 March 2026 02:47:49 +0000 (0:00:00.157) 0:00:41.501 ******** 2026-03-26 02:47:55.455628 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:47:55.455640 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:55.455651 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.455662 | orchestrator | 2026-03-26 02:47:55.455673 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-26 02:47:55.455684 | orchestrator | Thursday 26 March 2026 02:47:49 +0000 (0:00:00.183) 0:00:41.685 ******** 2026-03-26 02:47:55.455695 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.455706 | orchestrator | 2026-03-26 02:47:55.455717 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-26 02:47:55.455728 | orchestrator | 
Thursday 26 March 2026 02:47:50 +0000 (0:00:00.149) 0:00:41.835 ******** 2026-03-26 02:47:55.455739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:47:55.455751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:55.455765 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.455779 | orchestrator | 2026-03-26 02:47:55.455792 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-26 02:47:55.455805 | orchestrator | Thursday 26 March 2026 02:47:50 +0000 (0:00:00.159) 0:00:41.995 ******** 2026-03-26 02:47:55.455818 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.455830 | orchestrator | 2026-03-26 02:47:55.455842 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-26 02:47:55.455854 | orchestrator | Thursday 26 March 2026 02:47:50 +0000 (0:00:00.169) 0:00:42.164 ******** 2026-03-26 02:47:55.455867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:47:55.455879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:55.455892 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.455905 | orchestrator | 2026-03-26 02:47:55.455917 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-26 02:47:55.455950 | orchestrator | Thursday 26 March 2026 02:47:50 +0000 (0:00:00.165) 0:00:42.330 ******** 2026-03-26 02:47:55.455963 | orchestrator | ok: [testbed-node-4] 
2026-03-26 02:47:55.455977 | orchestrator | 2026-03-26 02:47:55.455990 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-26 02:47:55.456002 | orchestrator | Thursday 26 March 2026 02:47:50 +0000 (0:00:00.197) 0:00:42.528 ******** 2026-03-26 02:47:55.456015 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:47:55.456028 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:55.456041 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.456053 | orchestrator | 2026-03-26 02:47:55.456065 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-26 02:47:55.456078 | orchestrator | Thursday 26 March 2026 02:47:51 +0000 (0:00:00.448) 0:00:42.976 ******** 2026-03-26 02:47:55.456090 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:47:55.456103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:55.456116 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.456128 | orchestrator | 2026-03-26 02:47:55.456141 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-26 02:47:55.456174 | orchestrator | Thursday 26 March 2026 02:47:51 +0000 (0:00:00.161) 0:00:43.138 ******** 2026-03-26 02:47:55.456188 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 
02:47:55.456201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:47:55.456214 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.456226 | orchestrator | 2026-03-26 02:47:55.456237 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-26 02:47:55.456248 | orchestrator | Thursday 26 March 2026 02:47:51 +0000 (0:00:00.167) 0:00:43.306 ******** 2026-03-26 02:47:55.456292 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.456304 | orchestrator | 2026-03-26 02:47:55.456315 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-26 02:47:55.456326 | orchestrator | Thursday 26 March 2026 02:47:51 +0000 (0:00:00.172) 0:00:43.478 ******** 2026-03-26 02:47:55.456337 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.456348 | orchestrator | 2026-03-26 02:47:55.456358 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-26 02:47:55.456369 | orchestrator | Thursday 26 March 2026 02:47:51 +0000 (0:00:00.142) 0:00:43.620 ******** 2026-03-26 02:47:55.456387 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.456406 | orchestrator | 2026-03-26 02:47:55.456424 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-26 02:47:55.456442 | orchestrator | Thursday 26 March 2026 02:47:52 +0000 (0:00:00.155) 0:00:43.776 ******** 2026-03-26 02:47:55.456459 | orchestrator | ok: [testbed-node-4] => { 2026-03-26 02:47:55.456477 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-26 02:47:55.456496 | orchestrator | } 2026-03-26 02:47:55.456515 | orchestrator | 2026-03-26 02:47:55.456532 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-26 
02:47:55.456551 | orchestrator | Thursday 26 March 2026 02:47:52 +0000 (0:00:00.141) 0:00:43.917 ******** 2026-03-26 02:47:55.456568 | orchestrator | ok: [testbed-node-4] => { 2026-03-26 02:47:55.456588 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-26 02:47:55.456621 | orchestrator | } 2026-03-26 02:47:55.456640 | orchestrator | 2026-03-26 02:47:55.456653 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-26 02:47:55.456663 | orchestrator | Thursday 26 March 2026 02:47:52 +0000 (0:00:00.147) 0:00:44.065 ******** 2026-03-26 02:47:55.456674 | orchestrator | ok: [testbed-node-4] => { 2026-03-26 02:47:55.456685 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-26 02:47:55.456696 | orchestrator | } 2026-03-26 02:47:55.456707 | orchestrator | 2026-03-26 02:47:55.456718 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-26 02:47:55.456729 | orchestrator | Thursday 26 March 2026 02:47:52 +0000 (0:00:00.157) 0:00:44.223 ******** 2026-03-26 02:47:55.456740 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:47:55.456750 | orchestrator | 2026-03-26 02:47:55.456761 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-26 02:47:55.456772 | orchestrator | Thursday 26 March 2026 02:47:53 +0000 (0:00:00.520) 0:00:44.743 ******** 2026-03-26 02:47:55.456783 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:47:55.456794 | orchestrator | 2026-03-26 02:47:55.456805 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-26 02:47:55.456816 | orchestrator | Thursday 26 March 2026 02:47:53 +0000 (0:00:00.497) 0:00:45.241 ******** 2026-03-26 02:47:55.456826 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:47:55.456837 | orchestrator | 2026-03-26 02:47:55.456853 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-26 02:47:55.456871 | orchestrator | Thursday 26 March 2026 02:47:53 +0000 (0:00:00.496) 0:00:45.738 ******** 2026-03-26 02:47:55.456888 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:47:55.456908 | orchestrator | 2026-03-26 02:47:55.456926 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-26 02:47:55.456944 | orchestrator | Thursday 26 March 2026 02:47:54 +0000 (0:00:00.418) 0:00:46.157 ******** 2026-03-26 02:47:55.456964 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.456983 | orchestrator | 2026-03-26 02:47:55.457000 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-26 02:47:55.457020 | orchestrator | Thursday 26 March 2026 02:47:54 +0000 (0:00:00.124) 0:00:46.281 ******** 2026-03-26 02:47:55.457032 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.457043 | orchestrator | 2026-03-26 02:47:55.457054 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-26 02:47:55.457065 | orchestrator | Thursday 26 March 2026 02:47:54 +0000 (0:00:00.133) 0:00:46.415 ******** 2026-03-26 02:47:55.457075 | orchestrator | ok: [testbed-node-4] => { 2026-03-26 02:47:55.457087 | orchestrator |  "vgs_report": { 2026-03-26 02:47:55.457098 | orchestrator |  "vg": [] 2026-03-26 02:47:55.457109 | orchestrator |  } 2026-03-26 02:47:55.457120 | orchestrator | } 2026-03-26 02:47:55.457131 | orchestrator | 2026-03-26 02:47:55.457142 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-26 02:47:55.457153 | orchestrator | Thursday 26 March 2026 02:47:54 +0000 (0:00:00.159) 0:00:46.575 ******** 2026-03-26 02:47:55.457163 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.457174 | orchestrator | 2026-03-26 02:47:55.457186 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-26 02:47:55.457197 | orchestrator | Thursday 26 March 2026 02:47:54 +0000 (0:00:00.154) 0:00:46.729 ******** 2026-03-26 02:47:55.457207 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.457218 | orchestrator | 2026-03-26 02:47:55.457229 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-26 02:47:55.457240 | orchestrator | Thursday 26 March 2026 02:47:55 +0000 (0:00:00.144) 0:00:46.873 ******** 2026-03-26 02:47:55.457251 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.457287 | orchestrator | 2026-03-26 02:47:55.457298 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-26 02:47:55.457309 | orchestrator | Thursday 26 March 2026 02:47:55 +0000 (0:00:00.171) 0:00:47.045 ******** 2026-03-26 02:47:55.457329 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:47:55.457341 | orchestrator | 2026-03-26 02:47:55.457363 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-26 02:48:00.634470 | orchestrator | Thursday 26 March 2026 02:47:55 +0000 (0:00:00.145) 0:00:47.191 ******** 2026-03-26 02:48:00.634608 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.634636 | orchestrator | 2026-03-26 02:48:00.634657 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-26 02:48:00.634676 | orchestrator | Thursday 26 March 2026 02:47:55 +0000 (0:00:00.143) 0:00:47.334 ******** 2026-03-26 02:48:00.634694 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.634712 | orchestrator | 2026-03-26 02:48:00.634730 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-26 02:48:00.634748 | orchestrator | Thursday 26 March 2026 02:47:55 +0000 (0:00:00.150) 0:00:47.484 ******** 2026-03-26 02:48:00.634769 | orchestrator | skipping: [testbed-node-4] 
2026-03-26 02:48:00.634789 | orchestrator | 2026-03-26 02:48:00.634826 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-26 02:48:00.634847 | orchestrator | Thursday 26 March 2026 02:47:55 +0000 (0:00:00.139) 0:00:47.624 ******** 2026-03-26 02:48:00.634867 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.634880 | orchestrator | 2026-03-26 02:48:00.634891 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-26 02:48:00.634909 | orchestrator | Thursday 26 March 2026 02:47:56 +0000 (0:00:00.156) 0:00:47.781 ******** 2026-03-26 02:48:00.634928 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.634947 | orchestrator | 2026-03-26 02:48:00.634965 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-26 02:48:00.634981 | orchestrator | Thursday 26 March 2026 02:47:56 +0000 (0:00:00.375) 0:00:48.156 ******** 2026-03-26 02:48:00.634999 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635015 | orchestrator | 2026-03-26 02:48:00.635033 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-26 02:48:00.635052 | orchestrator | Thursday 26 March 2026 02:47:56 +0000 (0:00:00.148) 0:00:48.305 ******** 2026-03-26 02:48:00.635070 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635091 | orchestrator | 2026-03-26 02:48:00.635109 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-26 02:48:00.635125 | orchestrator | Thursday 26 March 2026 02:47:56 +0000 (0:00:00.207) 0:00:48.513 ******** 2026-03-26 02:48:00.635142 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635159 | orchestrator | 2026-03-26 02:48:00.635179 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-26 02:48:00.635198 | orchestrator | 
Thursday 26 March 2026 02:47:56 +0000 (0:00:00.156) 0:00:48.669 ******** 2026-03-26 02:48:00.635217 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635236 | orchestrator | 2026-03-26 02:48:00.635255 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-26 02:48:00.635332 | orchestrator | Thursday 26 March 2026 02:47:57 +0000 (0:00:00.171) 0:00:48.841 ******** 2026-03-26 02:48:00.635351 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635370 | orchestrator | 2026-03-26 02:48:00.635390 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-26 02:48:00.635409 | orchestrator | Thursday 26 March 2026 02:47:57 +0000 (0:00:00.150) 0:00:48.991 ******** 2026-03-26 02:48:00.635429 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.635443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.635454 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635465 | orchestrator | 2026-03-26 02:48:00.635476 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-26 02:48:00.635514 | orchestrator | Thursday 26 March 2026 02:47:57 +0000 (0:00:00.213) 0:00:49.205 ******** 2026-03-26 02:48:00.635526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.635537 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.635548 | orchestrator | skipping: 
[testbed-node-4] 2026-03-26 02:48:00.635559 | orchestrator | 2026-03-26 02:48:00.635570 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-26 02:48:00.635581 | orchestrator | Thursday 26 March 2026 02:47:57 +0000 (0:00:00.183) 0:00:49.388 ******** 2026-03-26 02:48:00.635592 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.635606 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.635624 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635643 | orchestrator | 2026-03-26 02:48:00.635661 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-26 02:48:00.635682 | orchestrator | Thursday 26 March 2026 02:47:57 +0000 (0:00:00.160) 0:00:49.549 ******** 2026-03-26 02:48:00.635701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.635719 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.635738 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635770 | orchestrator | 2026-03-26 02:48:00.635813 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-26 02:48:00.635831 | orchestrator | Thursday 26 March 2026 02:47:57 +0000 (0:00:00.153) 0:00:49.703 ******** 2026-03-26 02:48:00.635849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 
'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.635867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.635885 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.635902 | orchestrator | 2026-03-26 02:48:00.635932 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-26 02:48:00.635953 | orchestrator | Thursday 26 March 2026 02:47:58 +0000 (0:00:00.174) 0:00:49.877 ******** 2026-03-26 02:48:00.635971 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.635991 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.636002 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.636013 | orchestrator | 2026-03-26 02:48:00.636024 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-26 02:48:00.636035 | orchestrator | Thursday 26 March 2026 02:47:58 +0000 (0:00:00.156) 0:00:50.034 ******** 2026-03-26 02:48:00.636046 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.636057 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.636068 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.636091 | orchestrator | 2026-03-26 02:48:00.636102 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-26 
02:48:00.636114 | orchestrator | Thursday 26 March 2026 02:47:58 +0000 (0:00:00.413) 0:00:50.448 ******** 2026-03-26 02:48:00.636125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.636136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.636147 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.636158 | orchestrator | 2026-03-26 02:48:00.636169 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-26 02:48:00.636180 | orchestrator | Thursday 26 March 2026 02:47:58 +0000 (0:00:00.182) 0:00:50.631 ******** 2026-03-26 02:48:00.636191 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:48:00.636202 | orchestrator | 2026-03-26 02:48:00.636213 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-26 02:48:00.636224 | orchestrator | Thursday 26 March 2026 02:47:59 +0000 (0:00:00.498) 0:00:51.129 ******** 2026-03-26 02:48:00.636235 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:48:00.636246 | orchestrator | 2026-03-26 02:48:00.636256 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-26 02:48:00.636301 | orchestrator | Thursday 26 March 2026 02:47:59 +0000 (0:00:00.563) 0:00:51.692 ******** 2026-03-26 02:48:00.636312 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:48:00.636323 | orchestrator | 2026-03-26 02:48:00.636334 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-26 02:48:00.636345 | orchestrator | Thursday 26 March 2026 02:48:00 +0000 (0:00:00.177) 0:00:51.870 ******** 2026-03-26 02:48:00.636356 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'vg_name': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}) 2026-03-26 02:48:00.636368 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'vg_name': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}) 2026-03-26 02:48:00.636379 | orchestrator | 2026-03-26 02:48:00.636390 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-26 02:48:00.636401 | orchestrator | Thursday 26 March 2026 02:48:00 +0000 (0:00:00.169) 0:00:52.039 ******** 2026-03-26 02:48:00.636412 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.636423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:00.636434 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:00.636445 | orchestrator | 2026-03-26 02:48:00.636456 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-26 02:48:00.636467 | orchestrator | Thursday 26 March 2026 02:48:00 +0000 (0:00:00.166) 0:00:52.206 ******** 2026-03-26 02:48:00.636477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:00.636499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:07.536657 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:07.536761 | orchestrator | 2026-03-26 02:48:07.536774 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-26 02:48:07.536783 | 
orchestrator | Thursday 26 March 2026 02:48:00 +0000 (0:00:00.164) 0:00:52.370 ******** 2026-03-26 02:48:07.536791 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 02:48:07.536837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 02:48:07.536844 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:48:07.536851 | orchestrator | 2026-03-26 02:48:07.536857 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-26 02:48:07.536864 | orchestrator | Thursday 26 March 2026 02:48:00 +0000 (0:00:00.169) 0:00:52.539 ******** 2026-03-26 02:48:07.536871 | orchestrator | ok: [testbed-node-4] => { 2026-03-26 02:48:07.536878 | orchestrator |  "lvm_report": { 2026-03-26 02:48:07.536886 | orchestrator |  "lv": [ 2026-03-26 02:48:07.536893 | orchestrator |  { 2026-03-26 02:48:07.536900 | orchestrator |  "lv_name": "osd-block-a652979e-9f40-503a-bbc8-6de5e605991e", 2026-03-26 02:48:07.536908 | orchestrator |  "vg_name": "ceph-a652979e-9f40-503a-bbc8-6de5e605991e" 2026-03-26 02:48:07.536916 | orchestrator |  }, 2026-03-26 02:48:07.536923 | orchestrator |  { 2026-03-26 02:48:07.536930 | orchestrator |  "lv_name": "osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543", 2026-03-26 02:48:07.536937 | orchestrator |  "vg_name": "ceph-b5eee7c3-8883-5bbe-be5a-75726e822543" 2026-03-26 02:48:07.536944 | orchestrator |  } 2026-03-26 02:48:07.536951 | orchestrator |  ], 2026-03-26 02:48:07.536959 | orchestrator |  "pv": [ 2026-03-26 02:48:07.536967 | orchestrator |  { 2026-03-26 02:48:07.536974 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-26 02:48:07.536981 | orchestrator |  "vg_name": "ceph-a652979e-9f40-503a-bbc8-6de5e605991e" 2026-03-26 02:48:07.536989 | orchestrator |  }, 2026-03-26 
02:48:07.536997 | orchestrator |  { 2026-03-26 02:48:07.537004 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-26 02:48:07.537012 | orchestrator |  "vg_name": "ceph-b5eee7c3-8883-5bbe-be5a-75726e822543" 2026-03-26 02:48:07.537019 | orchestrator |  } 2026-03-26 02:48:07.537027 | orchestrator |  ] 2026-03-26 02:48:07.537034 | orchestrator |  } 2026-03-26 02:48:07.537042 | orchestrator | } 2026-03-26 02:48:07.537050 | orchestrator | 2026-03-26 02:48:07.537057 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-26 02:48:07.537064 | orchestrator | 2026-03-26 02:48:07.537072 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-26 02:48:07.537079 | orchestrator | Thursday 26 March 2026 02:48:01 +0000 (0:00:00.319) 0:00:52.858 ******** 2026-03-26 02:48:07.537086 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-26 02:48:07.537094 | orchestrator | 2026-03-26 02:48:07.537102 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-26 02:48:07.537109 | orchestrator | Thursday 26 March 2026 02:48:01 +0000 (0:00:00.763) 0:00:53.622 ******** 2026-03-26 02:48:07.537116 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:48:07.537124 | orchestrator | 2026-03-26 02:48:07.537131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537139 | orchestrator | Thursday 26 March 2026 02:48:02 +0000 (0:00:00.261) 0:00:53.883 ******** 2026-03-26 02:48:07.537146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-26 02:48:07.537153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-26 02:48:07.537161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-26 02:48:07.537167 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-26 02:48:07.537175 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-26 02:48:07.537184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-26 02:48:07.537192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-26 02:48:07.537209 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-26 02:48:07.537217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-26 02:48:07.537225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-26 02:48:07.537233 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-26 02:48:07.537241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-26 02:48:07.537248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-26 02:48:07.537255 | orchestrator | 2026-03-26 02:48:07.537262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537297 | orchestrator | Thursday 26 March 2026 02:48:02 +0000 (0:00:00.428) 0:00:54.311 ******** 2026-03-26 02:48:07.537305 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537312 | orchestrator | 2026-03-26 02:48:07.537319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537327 | orchestrator | Thursday 26 March 2026 02:48:02 +0000 (0:00:00.209) 0:00:54.521 ******** 2026-03-26 02:48:07.537335 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537343 | orchestrator | 2026-03-26 
02:48:07.537351 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537376 | orchestrator | Thursday 26 March 2026 02:48:03 +0000 (0:00:00.238) 0:00:54.759 ******** 2026-03-26 02:48:07.537386 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537393 | orchestrator | 2026-03-26 02:48:07.537401 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537408 | orchestrator | Thursday 26 March 2026 02:48:03 +0000 (0:00:00.208) 0:00:54.968 ******** 2026-03-26 02:48:07.537416 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537424 | orchestrator | 2026-03-26 02:48:07.537431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537439 | orchestrator | Thursday 26 March 2026 02:48:03 +0000 (0:00:00.231) 0:00:55.199 ******** 2026-03-26 02:48:07.537447 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537454 | orchestrator | 2026-03-26 02:48:07.537463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537471 | orchestrator | Thursday 26 March 2026 02:48:03 +0000 (0:00:00.225) 0:00:55.425 ******** 2026-03-26 02:48:07.537478 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537485 | orchestrator | 2026-03-26 02:48:07.537493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537501 | orchestrator | Thursday 26 March 2026 02:48:03 +0000 (0:00:00.221) 0:00:55.646 ******** 2026-03-26 02:48:07.537510 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537516 | orchestrator | 2026-03-26 02:48:07.537524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537532 | orchestrator | Thursday 26 March 2026 02:48:04 +0000 (0:00:00.238) 
0:00:55.885 ******** 2026-03-26 02:48:07.537540 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:07.537547 | orchestrator | 2026-03-26 02:48:07.537556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537564 | orchestrator | Thursday 26 March 2026 02:48:04 +0000 (0:00:00.205) 0:00:56.090 ******** 2026-03-26 02:48:07.537572 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539) 2026-03-26 02:48:07.537580 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539) 2026-03-26 02:48:07.537587 | orchestrator | 2026-03-26 02:48:07.537594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537601 | orchestrator | Thursday 26 March 2026 02:48:05 +0000 (0:00:00.934) 0:00:57.024 ******** 2026-03-26 02:48:07.537690 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d) 2026-03-26 02:48:07.537713 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d) 2026-03-26 02:48:07.537721 | orchestrator | 2026-03-26 02:48:07.537728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537735 | orchestrator | Thursday 26 March 2026 02:48:05 +0000 (0:00:00.451) 0:00:57.476 ******** 2026-03-26 02:48:07.537742 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102) 2026-03-26 02:48:07.537749 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102) 2026-03-26 02:48:07.537757 | orchestrator | 2026-03-26 02:48:07.537764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537771 | orchestrator | Thursday 26 
March 2026 02:48:06 +0000 (0:00:00.474) 0:00:57.951 ******** 2026-03-26 02:48:07.537779 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2) 2026-03-26 02:48:07.537786 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2) 2026-03-26 02:48:07.537794 | orchestrator | 2026-03-26 02:48:07.537801 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-26 02:48:07.537809 | orchestrator | Thursday 26 March 2026 02:48:06 +0000 (0:00:00.494) 0:00:58.445 ******** 2026-03-26 02:48:07.537816 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-26 02:48:07.537823 | orchestrator | 2026-03-26 02:48:07.537831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:07.537838 | orchestrator | Thursday 26 March 2026 02:48:07 +0000 (0:00:00.362) 0:00:58.808 ******** 2026-03-26 02:48:07.537845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-26 02:48:07.537853 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-26 02:48:07.537861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-26 02:48:07.537868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-26 02:48:07.537876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-26 02:48:07.537884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-26 02:48:07.537890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-26 02:48:07.537897 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-26 02:48:07.537904 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-26 02:48:07.537911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-26 02:48:07.537918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-26 02:48:07.537934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-26 02:48:17.011154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-26 02:48:17.011264 | orchestrator | 2026-03-26 02:48:17.011335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011348 | orchestrator | Thursday 26 March 2026 02:48:07 +0000 (0:00:00.454) 0:00:59.262 ******** 2026-03-26 02:48:17.011359 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011370 | orchestrator | 2026-03-26 02:48:17.011380 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011408 | orchestrator | Thursday 26 March 2026 02:48:07 +0000 (0:00:00.222) 0:00:59.485 ******** 2026-03-26 02:48:17.011420 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011451 | orchestrator | 2026-03-26 02:48:17.011461 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011471 | orchestrator | Thursday 26 March 2026 02:48:07 +0000 (0:00:00.210) 0:00:59.695 ******** 2026-03-26 02:48:17.011482 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011493 | orchestrator | 2026-03-26 02:48:17.011504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011514 | 
orchestrator | Thursday 26 March 2026 02:48:08 +0000 (0:00:00.223) 0:00:59.919 ******** 2026-03-26 02:48:17.011524 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011534 | orchestrator | 2026-03-26 02:48:17.011544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011555 | orchestrator | Thursday 26 March 2026 02:48:08 +0000 (0:00:00.241) 0:01:00.160 ******** 2026-03-26 02:48:17.011564 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011575 | orchestrator | 2026-03-26 02:48:17.011586 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011597 | orchestrator | Thursday 26 March 2026 02:48:09 +0000 (0:00:00.699) 0:01:00.860 ******** 2026-03-26 02:48:17.011607 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011617 | orchestrator | 2026-03-26 02:48:17.011627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011637 | orchestrator | Thursday 26 March 2026 02:48:09 +0000 (0:00:00.234) 0:01:01.094 ******** 2026-03-26 02:48:17.011646 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011656 | orchestrator | 2026-03-26 02:48:17.011667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011678 | orchestrator | Thursday 26 March 2026 02:48:09 +0000 (0:00:00.219) 0:01:01.314 ******** 2026-03-26 02:48:17.011690 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011700 | orchestrator | 2026-03-26 02:48:17.011709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011719 | orchestrator | Thursday 26 March 2026 02:48:09 +0000 (0:00:00.269) 0:01:01.583 ******** 2026-03-26 02:48:17.011729 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-26 02:48:17.011740 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-26 02:48:17.011752 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-26 02:48:17.011763 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-26 02:48:17.011774 | orchestrator | 2026-03-26 02:48:17.011785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011795 | orchestrator | Thursday 26 March 2026 02:48:10 +0000 (0:00:00.720) 0:01:02.304 ******** 2026-03-26 02:48:17.011805 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011816 | orchestrator | 2026-03-26 02:48:17.011827 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011837 | orchestrator | Thursday 26 March 2026 02:48:10 +0000 (0:00:00.227) 0:01:02.531 ******** 2026-03-26 02:48:17.011848 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011859 | orchestrator | 2026-03-26 02:48:17.011869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011879 | orchestrator | Thursday 26 March 2026 02:48:11 +0000 (0:00:00.217) 0:01:02.749 ******** 2026-03-26 02:48:17.011889 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011900 | orchestrator | 2026-03-26 02:48:17.011910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-26 02:48:17.011920 | orchestrator | Thursday 26 March 2026 02:48:11 +0000 (0:00:00.233) 0:01:02.983 ******** 2026-03-26 02:48:17.011931 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.011941 | orchestrator | 2026-03-26 02:48:17.011953 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-26 02:48:17.011963 | orchestrator | Thursday 26 March 2026 02:48:11 +0000 (0:00:00.221) 0:01:03.205 ******** 2026-03-26 02:48:17.011974 | orchestrator | skipping: [testbed-node-5] 2026-03-26 
02:48:17.011984 | orchestrator | 2026-03-26 02:48:17.012007 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-26 02:48:17.012018 | orchestrator | Thursday 26 March 2026 02:48:11 +0000 (0:00:00.145) 0:01:03.350 ******** 2026-03-26 02:48:17.012030 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '83c4def8-4703-5f7c-9549-7666ff9f2b66'}}) 2026-03-26 02:48:17.012042 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}}) 2026-03-26 02:48:17.012051 | orchestrator | 2026-03-26 02:48:17.012061 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-26 02:48:17.012072 | orchestrator | Thursday 26 March 2026 02:48:11 +0000 (0:00:00.178) 0:01:03.529 ******** 2026-03-26 02:48:17.012083 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'}) 2026-03-26 02:48:17.012095 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}) 2026-03-26 02:48:17.012105 | orchestrator | 2026-03-26 02:48:17.012117 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-26 02:48:17.012150 | orchestrator | Thursday 26 March 2026 02:48:13 +0000 (0:00:01.905) 0:01:05.434 ******** 2026-03-26 02:48:17.012163 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})  2026-03-26 02:48:17.012175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})  2026-03-26 02:48:17.012186 | orchestrator | skipping: 
[testbed-node-5] 2026-03-26 02:48:17.012196 | orchestrator | 2026-03-26 02:48:17.012216 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-26 02:48:17.012226 | orchestrator | Thursday 26 March 2026 02:48:14 +0000 (0:00:00.418) 0:01:05.852 ******** 2026-03-26 02:48:17.012236 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'}) 2026-03-26 02:48:17.012246 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}) 2026-03-26 02:48:17.012255 | orchestrator | 2026-03-26 02:48:17.012269 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-26 02:48:17.012310 | orchestrator | Thursday 26 March 2026 02:48:15 +0000 (0:00:01.377) 0:01:07.230 ******** 2026-03-26 02:48:17.012321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})  2026-03-26 02:48:17.012332 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})  2026-03-26 02:48:17.012342 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.012353 | orchestrator | 2026-03-26 02:48:17.012363 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-26 02:48:17.012375 | orchestrator | Thursday 26 March 2026 02:48:15 +0000 (0:00:00.180) 0:01:07.410 ******** 2026-03-26 02:48:17.012384 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.012394 | orchestrator | 2026-03-26 02:48:17.012404 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-26 02:48:17.012414 | 
orchestrator | Thursday 26 March 2026 02:48:15 +0000 (0:00:00.166) 0:01:07.576 ******** 2026-03-26 02:48:17.012424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})  2026-03-26 02:48:17.012435 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})  2026-03-26 02:48:17.012458 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.012468 | orchestrator | 2026-03-26 02:48:17.012478 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-26 02:48:17.012489 | orchestrator | Thursday 26 March 2026 02:48:16 +0000 (0:00:00.202) 0:01:07.779 ******** 2026-03-26 02:48:17.012500 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.012511 | orchestrator | 2026-03-26 02:48:17.012522 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-26 02:48:17.012532 | orchestrator | Thursday 26 March 2026 02:48:16 +0000 (0:00:00.166) 0:01:07.945 ******** 2026-03-26 02:48:17.012543 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})  2026-03-26 02:48:17.012554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})  2026-03-26 02:48:17.012564 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.012574 | orchestrator | 2026-03-26 02:48:17.012584 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-26 02:48:17.012595 | orchestrator | Thursday 26 March 2026 02:48:16 +0000 (0:00:00.163) 0:01:08.109 ******** 2026-03-26 02:48:17.012605 | orchestrator | 
skipping: [testbed-node-5] 2026-03-26 02:48:17.012614 | orchestrator | 2026-03-26 02:48:17.012624 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-26 02:48:17.012633 | orchestrator | Thursday 26 March 2026 02:48:16 +0000 (0:00:00.153) 0:01:08.262 ******** 2026-03-26 02:48:17.012643 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})  2026-03-26 02:48:17.012653 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})  2026-03-26 02:48:17.012664 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:17.012674 | orchestrator | 2026-03-26 02:48:17.012685 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-26 02:48:17.012695 | orchestrator | Thursday 26 March 2026 02:48:16 +0000 (0:00:00.161) 0:01:08.424 ******** 2026-03-26 02:48:17.012705 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:48:17.012715 | orchestrator | 2026-03-26 02:48:17.012725 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-26 02:48:17.012736 | orchestrator | Thursday 26 March 2026 02:48:16 +0000 (0:00:00.153) 0:01:08.577 ******** 2026-03-26 02:48:17.012760 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})  2026-03-26 02:48:23.969782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})  2026-03-26 02:48:23.969904 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:48:23.969917 | orchestrator | 2026-03-26 02:48:23.969927 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] ***************
2026-03-26 02:48:23.969937 | orchestrator | Thursday 26 March 2026 02:48:17 +0000 (0:00:00.170) 0:01:08.747 ********
2026-03-26 02:48:23.969957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:23.969966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:23.969974 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.969991 | orchestrator |
2026-03-26 02:48:23.970072 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-26 02:48:23.970083 | orchestrator | Thursday 26 March 2026 02:48:17 +0000 (0:00:00.183) 0:01:08.931 ********
2026-03-26 02:48:23.970112 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:23.970120 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:23.970128 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970134 | orchestrator |
2026-03-26 02:48:23.970144 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-26 02:48:23.970151 | orchestrator | Thursday 26 March 2026 02:48:17 +0000 (0:00:00.416) 0:01:09.348 ********
2026-03-26 02:48:23.970158 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970165 | orchestrator |
2026-03-26 02:48:23.970173 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-26 02:48:23.970180 | orchestrator | Thursday 26 March 2026 02:48:17 +0000 (0:00:00.155) 0:01:09.504 ********
2026-03-26 02:48:23.970187 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970194 | orchestrator |
2026-03-26 02:48:23.970200 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-26 02:48:23.970205 | orchestrator | Thursday 26 March 2026 02:48:17 +0000 (0:00:00.145) 0:01:09.649 ********
2026-03-26 02:48:23.970211 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970217 | orchestrator |
2026-03-26 02:48:23.970222 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-26 02:48:23.970228 | orchestrator | Thursday 26 March 2026 02:48:18 +0000 (0:00:00.155) 0:01:09.804 ********
2026-03-26 02:48:23.970234 | orchestrator | ok: [testbed-node-5] => {
2026-03-26 02:48:23.970241 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-26 02:48:23.970248 | orchestrator | }
2026-03-26 02:48:23.970255 | orchestrator |
2026-03-26 02:48:23.970263 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-26 02:48:23.970270 | orchestrator | Thursday 26 March 2026 02:48:18 +0000 (0:00:00.224) 0:01:10.029 ********
2026-03-26 02:48:23.970277 | orchestrator | ok: [testbed-node-5] => {
2026-03-26 02:48:23.970284 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-26 02:48:23.970336 | orchestrator | }
2026-03-26 02:48:23.970345 | orchestrator |
2026-03-26 02:48:23.970352 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-26 02:48:23.970360 | orchestrator | Thursday 26 March 2026 02:48:18 +0000 (0:00:00.176) 0:01:10.206 ********
2026-03-26 02:48:23.970368 | orchestrator | ok: [testbed-node-5] => {
2026-03-26 02:48:23.970377 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-26 02:48:23.970385 | orchestrator | }
2026-03-26 02:48:23.970392 | orchestrator |
2026-03-26 02:48:23.970400 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-26 02:48:23.970409 | orchestrator | Thursday 26 March 2026 02:48:18 +0000 (0:00:00.148) 0:01:10.354 ********
2026-03-26 02:48:23.970416 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:23.970425 | orchestrator |
2026-03-26 02:48:23.970433 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-26 02:48:23.970441 | orchestrator | Thursday 26 March 2026 02:48:19 +0000 (0:00:00.528) 0:01:10.883 ********
2026-03-26 02:48:23.970450 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:23.970457 | orchestrator |
2026-03-26 02:48:23.970466 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-26 02:48:23.970473 | orchestrator | Thursday 26 March 2026 02:48:19 +0000 (0:00:00.541) 0:01:11.424 ********
2026-03-26 02:48:23.970479 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:23.970485 | orchestrator |
2026-03-26 02:48:23.970491 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-26 02:48:23.970498 | orchestrator | Thursday 26 March 2026 02:48:20 +0000 (0:00:00.518) 0:01:11.943 ********
2026-03-26 02:48:23.970506 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:23.970514 | orchestrator |
2026-03-26 02:48:23.970522 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-26 02:48:23.970538 | orchestrator | Thursday 26 March 2026 02:48:20 +0000 (0:00:00.157) 0:01:12.100 ********
2026-03-26 02:48:23.970546 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970554 | orchestrator |
2026-03-26 02:48:23.970562 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-26 02:48:23.970570 | orchestrator | Thursday 26 March 2026 02:48:20 +0000 (0:00:00.131) 0:01:12.232 ********
2026-03-26 02:48:23.970579 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970586 | orchestrator |
2026-03-26 02:48:23.970594 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-26 02:48:23.970602 | orchestrator | Thursday 26 March 2026 02:48:20 +0000 (0:00:00.384) 0:01:12.616 ********
2026-03-26 02:48:23.970610 | orchestrator | ok: [testbed-node-5] => {
2026-03-26 02:48:23.970618 | orchestrator |     "vgs_report": {
2026-03-26 02:48:23.970627 | orchestrator |         "vg": []
2026-03-26 02:48:23.970654 | orchestrator |     }
2026-03-26 02:48:23.970663 | orchestrator | }
2026-03-26 02:48:23.970671 | orchestrator |
2026-03-26 02:48:23.970679 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-26 02:48:23.970686 | orchestrator | Thursday 26 March 2026 02:48:21 +0000 (0:00:00.164) 0:01:12.780 ********
2026-03-26 02:48:23.970692 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970699 | orchestrator |
2026-03-26 02:48:23.970706 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-26 02:48:23.970713 | orchestrator | Thursday 26 March 2026 02:48:21 +0000 (0:00:00.173) 0:01:12.953 ********
2026-03-26 02:48:23.970727 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970734 | orchestrator |
2026-03-26 02:48:23.970741 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-26 02:48:23.970748 | orchestrator | Thursday 26 March 2026 02:48:21 +0000 (0:00:00.156) 0:01:13.110 ********
2026-03-26 02:48:23.970755 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970762 | orchestrator |
2026-03-26 02:48:23.970769 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-26 02:48:23.970776 | orchestrator | Thursday 26 March 2026 02:48:21 +0000 (0:00:00.138) 0:01:13.249 ********
2026-03-26 02:48:23.970784 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970791 | orchestrator |
2026-03-26 02:48:23.970798 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-26 02:48:23.970805 | orchestrator | Thursday 26 March 2026 02:48:21 +0000 (0:00:00.171) 0:01:13.421 ********
2026-03-26 02:48:23.970813 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970820 | orchestrator |
2026-03-26 02:48:23.970827 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-26 02:48:23.970834 | orchestrator | Thursday 26 March 2026 02:48:21 +0000 (0:00:00.166) 0:01:13.587 ********
2026-03-26 02:48:23.970841 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970849 | orchestrator |
2026-03-26 02:48:23.970857 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-26 02:48:23.970865 | orchestrator | Thursday 26 March 2026 02:48:21 +0000 (0:00:00.127) 0:01:13.714 ********
2026-03-26 02:48:23.970872 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970879 | orchestrator |
2026-03-26 02:48:23.970886 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-26 02:48:23.970893 | orchestrator | Thursday 26 March 2026 02:48:22 +0000 (0:00:00.152) 0:01:13.867 ********
2026-03-26 02:48:23.970901 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970908 | orchestrator |
2026-03-26 02:48:23.970915 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-26 02:48:23.970922 | orchestrator | Thursday 26 March 2026 02:48:22 +0000 (0:00:00.136) 0:01:14.003 ********
2026-03-26 02:48:23.970929 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970936 | orchestrator |
2026-03-26 02:48:23.970943 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-26 02:48:23.970950 | orchestrator | Thursday 26 March 2026 02:48:22 +0000 (0:00:00.151) 0:01:14.154 ********
2026-03-26 02:48:23.970964 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.970971 | orchestrator |
2026-03-26 02:48:23.970978 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-26 02:48:23.970985 | orchestrator | Thursday 26 March 2026 02:48:22 +0000 (0:00:00.157) 0:01:14.312 ********
2026-03-26 02:48:23.970993 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.971000 | orchestrator |
2026-03-26 02:48:23.971007 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-26 02:48:23.971014 | orchestrator | Thursday 26 March 2026 02:48:22 +0000 (0:00:00.390) 0:01:14.702 ********
2026-03-26 02:48:23.971021 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.971029 | orchestrator |
2026-03-26 02:48:23.971036 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-26 02:48:23.971043 | orchestrator | Thursday 26 March 2026 02:48:23 +0000 (0:00:00.158) 0:01:14.861 ********
2026-03-26 02:48:23.971050 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.971058 | orchestrator |
2026-03-26 02:48:23.971065 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-26 02:48:23.971072 | orchestrator | Thursday 26 March 2026 02:48:23 +0000 (0:00:00.160) 0:01:15.021 ********
2026-03-26 02:48:23.971079 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.971086 | orchestrator |
2026-03-26 02:48:23.971093 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-26 02:48:23.971101 | orchestrator | Thursday 26 March 2026 02:48:23 +0000 (0:00:00.153) 0:01:15.175 ********
2026-03-26 02:48:23.971107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:23.971114 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:23.971120 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.971125 | orchestrator |
2026-03-26 02:48:23.971131 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-26 02:48:23.971137 | orchestrator | Thursday 26 March 2026 02:48:23 +0000 (0:00:00.174) 0:01:15.349 ********
2026-03-26 02:48:23.971142 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:23.971148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:23.971154 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:23.971160 | orchestrator |
2026-03-26 02:48:23.971166 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-26 02:48:23.971172 | orchestrator | Thursday 26 March 2026 02:48:23 +0000 (0:00:00.184) 0:01:15.534 ********
2026-03-26 02:48:23.971186 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222376 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222465 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222476 | orchestrator |
2026-03-26 02:48:27.222496 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-26 02:48:27.222502 | orchestrator | Thursday 26 March 2026 02:48:23 +0000 (0:00:00.172) 0:01:15.706 ********
2026-03-26 02:48:27.222506 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222511 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222528 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222532 | orchestrator |
2026-03-26 02:48:27.222536 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-26 02:48:27.222540 | orchestrator | Thursday 26 March 2026 02:48:24 +0000 (0:00:00.191) 0:01:15.898 ********
2026-03-26 02:48:27.222544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222547 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222551 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222555 | orchestrator |
2026-03-26 02:48:27.222559 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-26 02:48:27.222563 | orchestrator | Thursday 26 March 2026 02:48:24 +0000 (0:00:00.179) 0:01:16.077 ********
2026-03-26 02:48:27.222567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222574 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222578 | orchestrator |
2026-03-26 02:48:27.222582 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-26 02:48:27.222586 | orchestrator | Thursday 26 March 2026 02:48:24 +0000 (0:00:00.162) 0:01:16.240 ********
2026-03-26 02:48:27.222589 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222593 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222597 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222601 | orchestrator |
2026-03-26 02:48:27.222605 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-26 02:48:27.222608 | orchestrator | Thursday 26 March 2026 02:48:24 +0000 (0:00:00.199) 0:01:16.440 ********
2026-03-26 02:48:27.222612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222620 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222624 | orchestrator |
2026-03-26 02:48:27.222627 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-26 02:48:27.222631 | orchestrator | Thursday 26 March 2026 02:48:24 +0000 (0:00:00.167) 0:01:16.607 ********
2026-03-26 02:48:27.222635 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:27.222640 | orchestrator |
2026-03-26 02:48:27.222644 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-26 02:48:27.222648 | orchestrator | Thursday 26 March 2026 02:48:25 +0000 (0:00:00.744) 0:01:17.352 ********
2026-03-26 02:48:27.222651 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:27.222655 | orchestrator |
2026-03-26 02:48:27.222659 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-26 02:48:27.222663 | orchestrator | Thursday 26 March 2026 02:48:26 +0000 (0:00:00.515) 0:01:17.868 ********
2026-03-26 02:48:27.222667 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:27.222671 | orchestrator |
2026-03-26 02:48:27.222675 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-26 02:48:27.222678 | orchestrator | Thursday 26 March 2026 02:48:26 +0000 (0:00:00.185) 0:01:18.054 ********
2026-03-26 02:48:27.222686 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'vg_name': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222691 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'vg_name': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222695 | orchestrator |
2026-03-26 02:48:27.222698 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-26 02:48:27.222702 | orchestrator | Thursday 26 March 2026 02:48:26 +0000 (0:00:00.185) 0:01:18.239 ********
2026-03-26 02:48:27.222719 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222730 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222733 | orchestrator |
2026-03-26 02:48:27.222737 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-03-26 02:48:27.222741 | orchestrator | Thursday 26 March 2026 02:48:26 +0000 (0:00:00.187) 0:01:18.427 ********
2026-03-26 02:48:27.222745 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222749 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222752 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222756 | orchestrator |
2026-03-26 02:48:27.222760 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-03-26 02:48:27.222764 | orchestrator | Thursday 26 March 2026 02:48:26 +0000 (0:00:00.175) 0:01:18.602 ********
2026-03-26 02:48:27.222768 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 02:48:27.222771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 02:48:27.222775 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:27.222779 | orchestrator |
2026-03-26 02:48:27.222783 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-03-26 02:48:27.222787 | orchestrator | Thursday 26 March 2026 02:48:27 +0000 (0:00:00.184) 0:01:18.786 ********
2026-03-26 02:48:27.222791 | orchestrator | ok: [testbed-node-5] => {
2026-03-26 02:48:27.222794 | orchestrator |     "lvm_report": {
2026-03-26 02:48:27.222798 | orchestrator |         "lv": [
2026-03-26 02:48:27.222802 | orchestrator |             {
2026-03-26 02:48:27.222807 | orchestrator |                 "lv_name": "osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771",
2026-03-26 02:48:27.222811 | orchestrator |                 "vg_name": "ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771"
2026-03-26 02:48:27.222815 | orchestrator |             },
2026-03-26 02:48:27.222819 | orchestrator |             {
2026-03-26 02:48:27.222822 | orchestrator |                 "lv_name": "osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66",
2026-03-26 02:48:27.222826 | orchestrator |                 "vg_name": "ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66"
2026-03-26 02:48:27.222830 | orchestrator |             }
2026-03-26 02:48:27.222834 | orchestrator |         ],
2026-03-26 02:48:27.222838 | orchestrator |         "pv": [
2026-03-26 02:48:27.222841 | orchestrator |             {
2026-03-26 02:48:27.222845 | orchestrator |                 "pv_name": "/dev/sdb",
2026-03-26 02:48:27.222849 | orchestrator |                 "vg_name": "ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66"
2026-03-26 02:48:27.222853 | orchestrator |             },
2026-03-26 02:48:27.222857 | orchestrator |             {
2026-03-26 02:48:27.222861 | orchestrator |                 "pv_name": "/dev/sdc",
2026-03-26 02:48:27.222877 | orchestrator |                 "vg_name": "ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771"
2026-03-26 02:48:27.222881 | orchestrator |             }
2026-03-26 02:48:27.222885 | orchestrator |         ]
2026-03-26 02:48:27.222888 | orchestrator |     }
2026-03-26 02:48:27.222892 | orchestrator | }
2026-03-26 02:48:27.222896 | orchestrator |
2026-03-26 02:48:27.222900 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:48:27.222904 | orchestrator | testbed-node-3 : ok=51   changed=2    unreachable=0    failed=0    skipped=62   rescued=0    ignored=0
2026-03-26 02:48:27.222908 | orchestrator | testbed-node-4 : ok=51   changed=2    unreachable=0    failed=0    skipped=62   rescued=0    ignored=0
2026-03-26 02:48:27.222913 | orchestrator | testbed-node-5 : ok=51   changed=2    unreachable=0    failed=0    skipped=62   rescued=0    ignored=0
2026-03-26 02:48:27.222917 | orchestrator |
2026-03-26 02:48:27.222922 | orchestrator |
2026-03-26 02:48:27.222926 | orchestrator |
2026-03-26 02:48:27.222930 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:48:27.222934 | orchestrator | Thursday 26 March 2026 02:48:27 +0000 (0:00:00.154) 0:01:18.940 ********
2026-03-26 02:48:27.222939 | orchestrator | ===============================================================================
2026-03-26 02:48:27.222943 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s
2026-03-26 02:48:27.222947 | orchestrator | Create block LVs -------------------------------------------------------- 4.18s
2026-03-26 02:48:27.222951 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.77s
2026-03-26 02:48:27.222956 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s
2026-03-26 02:48:27.222960 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.61s
2026-03-26 02:48:27.222964 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s
2026-03-26 02:48:27.222969 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.54s
2026-03-26 02:48:27.222973 | orchestrator | Add known links to the list of available block devices ------------------ 1.51s
2026-03-26 02:48:27.222980 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s
2026-03-26 02:48:27.667744 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.31s
2026-03-26 02:48:27.667841 | orchestrator | Print LVM report data --------------------------------------------------- 1.05s
2026-03-26 02:48:27.667854 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 1.05s
2026-03-26 02:48:27.667881 | orchestrator | Add known links to the list of available block devices ------------------ 0.98s
2026-03-26 02:48:27.667889 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.97s
2026-03-26 02:48:27.667897 | orchestrator | Add known links to the list of available block devices ------------------ 0.93s
2026-03-26 02:48:27.667904 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s
2026-03-26 02:48:27.667912 | orchestrator | Count OSDs put on ceph_db_wal_devices defined in lvm_volumes ------------ 0.82s
2026-03-26 02:48:27.667919 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.79s
2026-03-26 02:48:27.667927 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.79s
2026-03-26 02:48:27.667935 | orchestrator | Create WAL LVs for ceph_db_wal_devices ---------------------------------- 0.77s
2026-03-26 02:48:40.164833 | orchestrator | 2026-03-26 02:48:40 | INFO  | Task 1dc7d21e-cb25-4027-b145-729ca3421def (facts) was prepared for execution.
2026-03-26 02:48:40.164922 | orchestrator | 2026-03-26 02:48:40 | INFO  | It takes a moment until task 1dc7d21e-cb25-4027-b145-729ca3421def (facts) has been started and output is visible here.
2026-03-26 02:48:53.199633 | orchestrator |
2026-03-26 02:48:53.199784 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-26 02:48:53.199838 | orchestrator |
2026-03-26 02:48:53.199851 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-26 02:48:53.199863 | orchestrator | Thursday 26 March 2026 02:48:44 +0000 (0:00:00.280) 0:00:00.280 ********
2026-03-26 02:48:53.199875 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:48:53.199887 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:48:53.199898 | orchestrator | ok: [testbed-manager]
2026-03-26 02:48:53.199909 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:48:53.199920 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:48:53.199931 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:48:53.199942 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:53.199952 | orchestrator |
2026-03-26 02:48:53.199964 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-26 02:48:53.199975 | orchestrator | Thursday 26 March 2026 02:48:45 +0000 (0:00:01.150) 0:00:01.431 ********
2026-03-26 02:48:53.199986 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:48:53.199997 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:48:53.200008 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:48:53.200019 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:48:53.200030 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:48:53.200041 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:48:53.200052 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:53.200063 | orchestrator |
2026-03-26 02:48:53.200074 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-26 02:48:53.200085 | orchestrator |
2026-03-26 02:48:53.200096 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-26 02:48:53.200107 | orchestrator | Thursday 26 March 2026 02:48:47 +0000 (0:00:01.369) 0:00:02.800 ********
2026-03-26 02:48:53.200118 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:48:53.200129 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:48:53.200140 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:48:53.200153 | orchestrator | ok: [testbed-manager]
2026-03-26 02:48:53.200166 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:48:53.200178 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:48:53.200190 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:48:53.200206 | orchestrator |
2026-03-26 02:48:53.200226 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-26 02:48:53.200250 | orchestrator |
2026-03-26 02:48:53.200277 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-26 02:48:53.200295 | orchestrator | Thursday 26 March 2026 02:48:52 +0000 (0:00:04.942) 0:00:07.742 ********
2026-03-26 02:48:53.200313 | orchestrator | skipping: [testbed-manager]
2026-03-26 02:48:53.200367 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:48:53.200386 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:48:53.200403 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:48:53.200420 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:48:53.200439 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:48:53.200457 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:48:53.200474 | orchestrator |
2026-03-26 02:48:53.200493 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 02:48:53.200513 | orchestrator | testbed-manager : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-03-26 02:48:53.200533 | orchestrator | testbed-node-0 : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-03-26 02:48:53.200553 | orchestrator | testbed-node-1 : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-03-26 02:48:53.200575 | orchestrator | testbed-node-2 : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-03-26 02:48:53.200594 | orchestrator | testbed-node-3 : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-03-26 02:48:53.200626 | orchestrator | testbed-node-4 : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-03-26 02:48:53.200644 | orchestrator | testbed-node-5 : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
2026-03-26 02:48:53.200670 | orchestrator |
2026-03-26 02:48:53.200692 | orchestrator |
2026-03-26 02:48:53.200709 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 02:48:53.200747 | orchestrator | Thursday 26 March 2026 02:48:52 +0000 (0:00:00.595) 0:00:08.338 ********
2026-03-26 02:48:53.200767 | orchestrator | ===============================================================================
2026-03-26 02:48:53.200787 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s
2026-03-26 02:48:53.200808 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s
2026-03-26 02:48:53.200827 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s
2026-03-26 02:48:53.200845 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s
2026-03-26 02:48:55.812723 | orchestrator | 2026-03-26 02:48:55 | INFO  | Task 0afd95ab-3afe-4cf0-a623-ba200f82b57e (ceph) was prepared for execution.
2026-03-26 02:48:55.812828 | orchestrator | 2026-03-26 02:48:55 | INFO  | It takes a moment until task 0afd95ab-3afe-4cf0-a623-ba200f82b57e (ceph) has been started and output is visible here.
2026-03-26 02:49:15.583707 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-26 02:49:15.583800 | orchestrator | 2.16.14
2026-03-26 02:49:15.583811 | orchestrator |
2026-03-26 02:49:15.583818 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-26 02:49:15.583826 | orchestrator |
2026-03-26 02:49:15.583833 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-26 02:49:15.583840 | orchestrator | Thursday 26 March 2026 02:49:01 +0000 (0:00:00.861) 0:00:00.861 ********
2026-03-26 02:49:15.583847 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:49:15.583854 | orchestrator |
2026-03-26 02:49:15.583861 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-26 02:49:15.583867 | orchestrator | Thursday 26 March 2026 02:49:02 +0000 (0:00:01.228) 0:00:02.090 ********
2026-03-26 02:49:15.583873 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:15.583880 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:15.583886 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:15.583892 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:15.583899 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:15.583905 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:15.583912 | orchestrator |
2026-03-26 02:49:15.583918 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-26 02:49:15.583925 | orchestrator | Thursday 26 March 2026 02:49:03 +0000 (0:00:01.368) 0:00:03.459 ********
2026-03-26 02:49:15.583931 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:15.583937 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:15.583944 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:15.583950 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:15.583956 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:15.583962 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:15.583968 | orchestrator |
2026-03-26 02:49:15.583974 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 02:49:15.583981 | orchestrator | Thursday 26 March 2026 02:49:04 +0000 (0:00:01.039) 0:00:04.498 ********
2026-03-26 02:49:15.583987 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:15.583993 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:15.583999 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:15.584005 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:15.584031 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:15.584037 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:15.584044 | orchestrator |
2026-03-26 02:49:15.584050 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 02:49:15.584056 | orchestrator | Thursday 26 March 2026 02:49:06 +0000 (0:00:01.044) 0:00:05.542 ********
2026-03-26 02:49:15.584062 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:15.584069 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:15.584075 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:15.584081 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:15.584087 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:15.584093 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:15.584099 | orchestrator |
2026-03-26 02:49:15.584106 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-26 02:49:15.584112 | orchestrator | Thursday 26 March 2026 02:49:07 +0000 (0:00:01.090) 0:00:06.633 ********
2026-03-26 02:49:15.584118 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:15.584124 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:15.584130 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:15.584137 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:15.584143 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:15.584149 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:15.584155 | orchestrator |
2026-03-26 02:49:15.584161 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-26 02:49:15.584167 | orchestrator | Thursday 26 March 2026 02:49:07 +0000 (0:00:00.670) 0:00:07.304 ********
2026-03-26 02:49:15.584176 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:15.584185 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:15.584192 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:15.584198 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:15.584204 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:15.584210 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:15.584216 | orchestrator |
2026-03-26 02:49:15.584222 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-26 02:49:15.584229 | orchestrator | Thursday 26 March 2026 02:49:08 +0000 (0:00:00.857) 0:00:08.161 ********
2026-03-26 02:49:15.584235 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:15.584242 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:15.584249 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:15.584256 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:15.584263 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:15.584270 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:15.584277 | orchestrator |
2026-03-26 02:49:15.584284 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-26 02:49:15.584291 | orchestrator | Thursday 26 March 2026 02:49:09 +0000 (0:00:00.633) 0:00:08.795 ********
2026-03-26 02:49:15.584299 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:15.584306 | orchestrator |
ok: [testbed-node-4] 2026-03-26 02:49:15.584313 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:49:15.584320 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:49:15.584327 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:49:15.584367 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:49:15.584375 | orchestrator | 2026-03-26 02:49:15.584383 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 02:49:15.584390 | orchestrator | Thursday 26 March 2026 02:49:10 +0000 (0:00:00.826) 0:00:09.621 ******** 2026-03-26 02:49:15.584397 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 02:49:15.584404 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 02:49:15.584411 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 02:49:15.584418 | orchestrator | 2026-03-26 02:49:15.584425 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 02:49:15.584432 | orchestrator | Thursday 26 March 2026 02:49:10 +0000 (0:00:00.714) 0:00:10.336 ******** 2026-03-26 02:49:15.584445 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:49:15.584452 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:49:15.584459 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:49:15.584477 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:49:15.584485 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:49:15.584492 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:49:15.584499 | orchestrator | 2026-03-26 02:49:15.584506 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 02:49:15.584513 | orchestrator | Thursday 26 March 2026 02:49:11 +0000 (0:00:00.712) 0:00:11.048 ******** 2026-03-26 02:49:15.584525 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item=testbed-node-0) 2026-03-26 02:49:15.584535 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 02:49:15.584549 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 02:49:15.584564 | orchestrator | 2026-03-26 02:49:15.584574 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 02:49:15.584585 | orchestrator | Thursday 26 March 2026 02:49:14 +0000 (0:00:02.505) 0:00:13.554 ******** 2026-03-26 02:49:15.584597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 02:49:15.584609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 02:49:15.584621 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 02:49:15.584632 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:15.584641 | orchestrator | 2026-03-26 02:49:15.584647 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 02:49:15.584654 | orchestrator | Thursday 26 March 2026 02:49:14 +0000 (0:00:00.438) 0:00:13.992 ******** 2026-03-26 02:49:15.584662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 02:49:15.584672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 02:49:15.584678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 02:49:15.584685 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:15.584691 | orchestrator | 2026-03-26 02:49:15.584697 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 02:49:15.584704 | orchestrator | Thursday 26 March 2026 02:49:15 +0000 (0:00:00.653) 0:00:14.646 ******** 2026-03-26 02:49:15.584712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:15.584721 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:15.584728 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:15.584743 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:15.584750 | orchestrator | 2026-03-26 02:49:15.584761 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-03-26 02:49:15.584767 | orchestrator | Thursday 26 March 2026 02:49:15 +0000 (0:00:00.169) 0:00:14.816 ******** 2026-03-26 02:49:15.584783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 02:49:12.428284', 'end': '2026-03-26 02:49:12.465831', 'delta': '0:00:00.037547', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 02:49:25.861526 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 02:49:13.089373', 'end': '2026-03-26 02:49:13.140350', 'delta': '0:00:00.050977', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 02:49:25.861638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 02:49:13.625707', 'end': '2026-03-26 02:49:13.679965', 'delta': 
'0:00:00.054258', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 02:49:25.861654 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.861665 | orchestrator | 2026-03-26 02:49:25.861675 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 02:49:25.861686 | orchestrator | Thursday 26 March 2026 02:49:15 +0000 (0:00:00.270) 0:00:15.086 ******** 2026-03-26 02:49:25.861695 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:49:25.861704 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:49:25.861714 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:49:25.861722 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:49:25.861730 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:49:25.861739 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:49:25.861748 | orchestrator | 2026-03-26 02:49:25.861756 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 02:49:25.861765 | orchestrator | Thursday 26 March 2026 02:49:16 +0000 (0:00:00.749) 0:00:15.836 ******** 2026-03-26 02:49:25.861774 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-26 02:49:25.861783 | orchestrator | 2026-03-26 02:49:25.861791 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 02:49:25.861800 | orchestrator | Thursday 26 March 2026 02:49:17 +0000 (0:00:01.088) 0:00:16.925 ******** 2026-03-26 02:49:25.861834 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.861843 | 
orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.861851 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.861859 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.861868 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.861877 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.861885 | orchestrator | 2026-03-26 02:49:25.861894 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 02:49:25.861903 | orchestrator | Thursday 26 March 2026 02:49:18 +0000 (0:00:00.632) 0:00:17.558 ******** 2026-03-26 02:49:25.861912 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.861920 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.861930 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.861938 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.861946 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.861955 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.861965 | orchestrator | 2026-03-26 02:49:25.861974 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 02:49:25.861984 | orchestrator | Thursday 26 March 2026 02:49:19 +0000 (0:00:01.181) 0:00:18.739 ******** 2026-03-26 02:49:25.861992 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862000 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862009 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862073 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862082 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862107 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.862117 | orchestrator | 2026-03-26 02:49:25.862188 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 02:49:25.862200 | orchestrator | Thursday 26 March 2026 02:49:19 
+0000 (0:00:00.710) 0:00:19.450 ******** 2026-03-26 02:49:25.862210 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862220 | orchestrator | 2026-03-26 02:49:25.862229 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 02:49:25.862239 | orchestrator | Thursday 26 March 2026 02:49:20 +0000 (0:00:00.145) 0:00:19.595 ******** 2026-03-26 02:49:25.862248 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862259 | orchestrator | 2026-03-26 02:49:25.862269 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 02:49:25.862278 | orchestrator | Thursday 26 March 2026 02:49:20 +0000 (0:00:00.234) 0:00:19.829 ******** 2026-03-26 02:49:25.862286 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862297 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862307 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862316 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862324 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862334 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.862342 | orchestrator | 2026-03-26 02:49:25.862396 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 02:49:25.862406 | orchestrator | Thursday 26 March 2026 02:49:21 +0000 (0:00:00.828) 0:00:20.658 ******** 2026-03-26 02:49:25.862415 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862424 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862432 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862440 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862449 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862457 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.862466 | orchestrator | 2026-03-26 02:49:25.862476 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-03-26 02:49:25.862485 | orchestrator | Thursday 26 March 2026 02:49:21 +0000 (0:00:00.666) 0:00:21.325 ******** 2026-03-26 02:49:25.862493 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862503 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862512 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862533 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862542 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862551 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.862560 | orchestrator | 2026-03-26 02:49:25.862569 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 02:49:25.862577 | orchestrator | Thursday 26 March 2026 02:49:22 +0000 (0:00:00.836) 0:00:22.162 ******** 2026-03-26 02:49:25.862586 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862594 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862619 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862636 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862647 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862656 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.862664 | orchestrator | 2026-03-26 02:49:25.862674 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 02:49:25.862684 | orchestrator | Thursday 26 March 2026 02:49:23 +0000 (0:00:00.625) 0:00:22.787 ******** 2026-03-26 02:49:25.862693 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862712 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862720 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862730 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862739 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862748 | orchestrator 
| skipping: [testbed-node-2] 2026-03-26 02:49:25.862758 | orchestrator | 2026-03-26 02:49:25.862767 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 02:49:25.862775 | orchestrator | Thursday 26 March 2026 02:49:24 +0000 (0:00:00.880) 0:00:23.668 ******** 2026-03-26 02:49:25.862784 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862793 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862802 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862810 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862819 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862827 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.862835 | orchestrator | 2026-03-26 02:49:25.862844 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 02:49:25.862854 | orchestrator | Thursday 26 March 2026 02:49:24 +0000 (0:00:00.647) 0:00:24.315 ******** 2026-03-26 02:49:25.862862 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:25.862870 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:25.862878 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:25.862887 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:25.862895 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:25.862904 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:49:25.862913 | orchestrator | 2026-03-26 02:49:25.862921 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 02:49:25.862930 | orchestrator | Thursday 26 March 2026 02:49:25 +0000 (0:00:00.917) 0:00:25.233 ******** 2026-03-26 02:49:25.862941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.862963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.862993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997531 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:25.997765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:25.997801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:25.997815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:25.997827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:25.997850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:25.997869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.103781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 
'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.103867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.103879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.103886 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.103893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.103900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.104006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.104019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.104027 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.104054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.104066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.104139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.104156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.379674 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:49:26.379803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.379830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.379851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.379873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.379928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.379980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.380001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.380021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.380068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.380088 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:26.380107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.380126 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.380158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.380197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.380232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.550810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.550915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.550959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.550978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551113 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:26.551138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.551163 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 02:49:26.551178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.551213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.787792 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.787889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.787897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.787902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.787918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 02:49:26.787938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-26 02:49:26.787950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-26 02:49:26.787956 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:26.787962 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:26.787967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:26.787972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:26.787980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:26.787985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:26.787990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:26.787994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:26.787999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:26.788008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-26 02:49:27.289982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-26 02:49:27.290131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-26 02:49:27.290146 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:27.290157 | orchestrator |
2026-03-26 02:49:27.290167 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-26 02:49:27.290178 | orchestrator | Thursday 26 March 2026 02:49:26 +0000 (0:00:01.059) 0:00:26.292 ********
2026-03-26 02:49:27.290190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290237 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290273 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290283 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290315 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.290331 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353148 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353489 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353507 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.353576 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.482830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.482951 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.482975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.483016 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.483032 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.483048 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.483098 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.483131 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:27.483150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.483176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597034 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597139 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597144 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597149 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597170 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597199 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597205 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597212 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597218 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:27.597236 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-26 02:49:27.811994 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812138 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:49:27.812154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812168 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-15-00']}, 'model': 'QEMU DVD-ROM', 
'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812230 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812245 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812257 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812316 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812335 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812347 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812446 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:27.812471 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053219 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-26 02:49:28.053327 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053442 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:49:28.053467 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053500 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053513 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053524 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053536 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053556 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053577 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053589 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.053612 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.312537 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.312666 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:49:28.312686 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:49:28.312699 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-26 02:49:28.312714 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.312726 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 02:49:28.312738 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 
2026-03-26 02:49:28.312749 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:28.312831 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:28.312845 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:28.312857 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:28.312872 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:28.312909 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 02:49:40.887048 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:40.887174 | orchestrator |
2026-03-26 02:49:40.887188 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 02:49:40.887200 | orchestrator | Thursday 26 March 2026 02:49:28 +0000 (0:00:01.520) 0:00:27.813 ********
2026-03-26 02:49:40.887208 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:40.887217 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:40.887225 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:40.887234 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:40.887242 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:40.887250 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:40.887258 | orchestrator |
2026-03-26 02:49:40.887266 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 02:49:40.887274 | orchestrator | Thursday 26 March 2026 02:49:29 +0000 (0:00:00.948) 0:00:28.761 ********
2026-03-26 02:49:40.887282 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:40.887290 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:40.887298 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:40.887306 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:49:40.887314 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:49:40.887322 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:49:40.887330 | orchestrator |
2026-03-26 02:49:40.887338 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 02:49:40.887346 | orchestrator | Thursday 26 March 2026 02:49:30 +0000 (0:00:00.885) 0:00:29.647 ********
2026-03-26 02:49:40.887355 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.887363 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.887409 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.887418 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:40.887426 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:40.887434 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:40.887442 | orchestrator |
2026-03-26 02:49:40.887450 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 02:49:40.887459 | orchestrator | Thursday 26 March 2026 02:49:31 +0000 (0:00:00.928) 0:00:30.575 ********
2026-03-26 02:49:40.887468 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.887476 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.887484 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.887492 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:40.887500 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:40.887508 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:40.887517 | orchestrator |
2026-03-26 02:49:40.887525 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 02:49:40.887533 | orchestrator | Thursday 26 March 2026 02:49:31 +0000 (0:00:00.926) 0:00:31.501 ********
2026-03-26 02:49:40.887541 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.887549 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.887557 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.887586 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:40.887595 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:40.887603 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:40.887613 | orchestrator |
2026-03-26 02:49:40.887622 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 02:49:40.887631 | orchestrator | Thursday 26 March 2026 02:49:32 +0000 (0:00:00.664) 0:00:32.166 ********
2026-03-26 02:49:40.887640 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.887649 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.887657 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.887666 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:40.887675 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:40.887684 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:40.887693 | orchestrator |
2026-03-26 02:49:40.887702 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 02:49:40.887711 | orchestrator | Thursday 26 March 2026 02:49:33 +0000 (0:00:00.925) 0:00:33.092 ********
2026-03-26 02:49:40.887721 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-26 02:49:40.887730 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 02:49:40.887740 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-26 02:49:40.887749 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-26 02:49:40.887757 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 02:49:40.887765 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 02:49:40.887773 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-26 02:49:40.887780 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 02:49:40.887788 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-26 02:49:40.887796 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-26 02:49:40.887804 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 02:49:40.887812 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-26 02:49:40.887820 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 02:49:40.887828 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-26 02:49:40.887836 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 02:49:40.887844 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-26 02:49:40.887852 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-26 02:49:40.887873 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 02:49:40.887881 | orchestrator |
2026-03-26 02:49:40.887889 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 02:49:40.887897 | orchestrator | Thursday 26 March 2026 02:49:35 +0000 (0:00:01.820) 0:00:34.912 ********
2026-03-26 02:49:40.887906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-26 02:49:40.887914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-26 02:49:40.887922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-26 02:49:40.887930 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.887938 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 02:49:40.887946 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 02:49:40.887954 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 02:49:40.887977 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.887986 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-26 02:49:40.887994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-26 02:49:40.888002 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-26 02:49:40.888010 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.888018 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 02:49:40.888026 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 02:49:40.888040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 02:49:40.888048 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:40.888056 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-26 02:49:40.888064 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 02:49:40.888072 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-26 02:49:40.888080 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:40.888088 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-26 02:49:40.888096 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-26 02:49:40.888104 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 02:49:40.888111 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:40.888119 | orchestrator |
2026-03-26 02:49:40.888128 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 02:49:40.888136 | orchestrator | Thursday 26 March 2026 02:49:36 +0000 (0:00:01.050) 0:00:35.963 ********
2026-03-26 02:49:40.888144 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:49:40.888152 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:49:40.888160 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:49:40.888168 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:49:40.888177 | orchestrator |
2026-03-26 02:49:40.888190 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 02:49:40.888204 | orchestrator | Thursday 26 March 2026 02:49:37 +0000 (0:00:01.128) 0:00:37.091 ********
2026-03-26 02:49:40.888216 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.888229 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.888242 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.888254 | orchestrator |
2026-03-26 02:49:40.888267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 02:49:40.888280 | orchestrator | Thursday 26 March 2026 02:49:37 +0000 (0:00:00.347) 0:00:37.439 ********
2026-03-26 02:49:40.888294 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.888307 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.888319 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.888332 | orchestrator |
2026-03-26 02:49:40.888344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 02:49:40.888357 | orchestrator | Thursday 26 March 2026 02:49:38 +0000 (0:00:00.379) 0:00:37.819 ********
2026-03-26 02:49:40.888397 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.888409 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:49:40.888417 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:49:40.888425 | orchestrator |
2026-03-26 02:49:40.888433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 02:49:40.888441 | orchestrator | Thursday 26 March 2026 02:49:38 +0000 (0:00:00.534) 0:00:38.354 ********
2026-03-26 02:49:40.888449 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:40.888457 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:40.888465 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:40.888473 | orchestrator |
2026-03-26 02:49:40.888483 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 02:49:40.888496 | orchestrator | Thursday 26 March 2026 02:49:39 +0000 (0:00:00.460) 0:00:38.815 ********
2026-03-26 02:49:40.888509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:49:40.888521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:49:40.888534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:49:40.888547 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.888560 | orchestrator |
2026-03-26 02:49:40.888574 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 02:49:40.888599 | orchestrator | Thursday 26 March 2026 02:49:39 +0000 (0:00:00.387) 0:00:39.203 ********
2026-03-26 02:49:40.888613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:49:40.888626 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:49:40.888639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:49:40.888652 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.888665 | orchestrator |
2026-03-26 02:49:40.888679 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 02:49:40.888692 | orchestrator | Thursday 26 March 2026 02:49:40 +0000 (0:00:00.428) 0:00:39.631 ********
2026-03-26 02:49:40.888712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:49:40.888721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:49:40.888729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:49:40.888737 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:49:40.888745 | orchestrator |
2026-03-26 02:49:40.888753 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 02:49:40.888761 | orchestrator | Thursday 26 March 2026 02:49:40 +0000 (0:00:00.401) 0:00:40.032 ********
2026-03-26 02:49:40.888769 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:49:40.888777 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:49:40.888785 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:49:40.888793 | orchestrator |
2026-03-26 02:49:40.888802 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 02:49:40.888818 | orchestrator | Thursday 26 March 2026 02:49:40 +0000 (0:00:00.355) 0:00:40.388 ********
2026-03-26 02:50:01.593829 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 02:50:01.593943 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-26 02:50:01.593959 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-26 02:50:01.593971 | orchestrator |
2026-03-26 02:50:01.593984 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 02:50:01.593997 | orchestrator | Thursday 26 March 2026 02:49:41 +0000 (0:00:01.045) 0:00:41.433 ********
2026-03-26 02:50:01.594008 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 02:50:01.594074 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 02:50:01.594087 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 02:50:01.594100 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:50:01.594111 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 02:50:01.594123 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 02:50:01.594133 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 02:50:01.594144 | orchestrator |
2026-03-26 02:50:01.594156 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 02:50:01.594167 | orchestrator | Thursday 26 March 2026 02:49:42 +0000 (0:00:00.821) 0:00:42.255 ********
2026-03-26 02:50:01.594178 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 02:50:01.594189 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 02:50:01.594200 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 02:50:01.594211 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:50:01.594222 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 02:50:01.594233 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 02:50:01.594244 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 02:50:01.594255 | orchestrator |
2026-03-26 02:50:01.594266 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 02:50:01.594302 | orchestrator | Thursday 26 March 2026 02:49:44 +0000 (0:00:02.014) 0:00:44.269 ********
2026-03-26 02:50:01.594315 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:50:01.594328 | orchestrator |
2026-03-26 02:50:01.594339 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 02:50:01.594350 | orchestrator | Thursday 26 March 2026 02:49:46 +0000 (0:00:01.312) 0:00:45.582 ********
2026-03-26 02:50:01.594361 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:50:01.594372 | orchestrator |
2026-03-26 02:50:01.594383 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 02:50:01.594437 | orchestrator | Thursday 26 March 2026 02:49:47 +0000 (0:00:01.295) 0:00:46.877 ********
2026-03-26 02:50:01.594449 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.594460 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:50:01.594471 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:50:01.594482 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:50:01.594493 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:50:01.594504 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:50:01.594515 | orchestrator |
2026-03-26 02:50:01.594526 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 02:50:01.594537 | orchestrator | Thursday 26 March 2026 02:49:48 +0000 (0:00:01.274) 0:00:48.152 ********
2026-03-26 02:50:01.594548 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.594559 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.594570 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.594581 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.594592 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.594603 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.594613 | orchestrator |
2026-03-26 02:50:01.594624 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 02:50:01.594635 | orchestrator | Thursday 26 March 2026 02:49:49 +0000 (0:00:00.747) 0:00:48.900 ********
2026-03-26 02:50:01.594646 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.594663 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.594681 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.594700 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.594723 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.594748 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.594785 | orchestrator |
2026-03-26 02:50:01.594803 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 02:50:01.594821 | orchestrator | Thursday 26 March 2026 02:49:50 +0000 (0:00:01.056) 0:00:49.957 ********
2026-03-26 02:50:01.594839 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.594859 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.594876 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.594888 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.594899 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.594910 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.594921 | orchestrator |
2026-03-26 02:50:01.594932 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 02:50:01.594943 | orchestrator | Thursday 26 March 2026 02:49:51 +0000 (0:00:00.725) 0:00:50.682 ********
2026-03-26 02:50:01.594954 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.594965 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:50:01.594997 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:50:01.595009 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:50:01.595020 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:50:01.595031 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:50:01.595042 | orchestrator |
2026-03-26 02:50:01.595053 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 02:50:01.595077 | orchestrator | Thursday 26 March 2026 02:49:52 +0000 (0:00:01.363) 0:00:52.046 ********
2026-03-26 02:50:01.595088 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.595099 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:50:01.595110 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:50:01.595121 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.595132 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.595143 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.595154 | orchestrator |
2026-03-26 02:50:01.595165 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 02:50:01.595176 | orchestrator | Thursday 26 March 2026 02:49:53 +0000 (0:00:00.664) 0:00:52.710 ********
2026-03-26 02:50:01.595187 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.595198 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:50:01.595209 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:50:01.595220 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.595231 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.595242 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.595253 | orchestrator |
2026-03-26 02:50:01.595264 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 02:50:01.595275 | orchestrator | Thursday 26 March 2026 02:49:54 +0000 (0:00:00.893) 0:00:53.604 ********
2026-03-26 02:50:01.595286 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.595296 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.595307 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.595318 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:50:01.595329 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:50:01.595340 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:50:01.595351 | orchestrator |
2026-03-26 02:50:01.595362 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 02:50:01.595373 | orchestrator | Thursday 26 March 2026 02:49:55 +0000 (0:00:01.075) 0:00:54.679 ********
2026-03-26 02:50:01.595384 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.595419 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.595430 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.595441 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:50:01.595452 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:50:01.595463 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:50:01.595474 | orchestrator |
2026-03-26 02:50:01.595485 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 02:50:01.595496 | orchestrator | Thursday 26 March 2026 02:49:56 +0000 (0:00:01.406) 0:00:56.086 ********
2026-03-26 02:50:01.595507 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.595518 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:50:01.595529 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:50:01.595540 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.595552 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.595562 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.595574 | orchestrator |
2026-03-26 02:50:01.595585 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 02:50:01.595596 | orchestrator | Thursday 26 March 2026 02:49:57 +0000 (0:00:00.657) 0:00:56.744 ********
2026-03-26 02:50:01.595607 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.595618 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:50:01.595629 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:50:01.595640 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:50:01.595651 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:50:01.595662 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:50:01.595673 | orchestrator |
2026-03-26 02:50:01.595685 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 02:50:01.595696 | orchestrator | Thursday 26 March 2026 02:49:58 +0000 (0:00:00.990) 0:00:57.735 ********
2026-03-26 02:50:01.595707 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.595718 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.595736 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.595747 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.595758 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.595769 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.595780 | orchestrator |
2026-03-26 02:50:01.595791 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 02:50:01.595802 | orchestrator | Thursday 26 March 2026 02:49:58 +0000 (0:00:00.682) 0:00:58.417 ********
2026-03-26 02:50:01.595813 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.595824 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.595835 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.595846 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.595857 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.595868 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.595879 | orchestrator |
2026-03-26 02:50:01.595890 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 02:50:01.595901 | orchestrator | Thursday 26 March 2026 02:49:59 +0000 (0:00:00.876) 0:00:59.293 ********
2026-03-26 02:50:01.595912 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:50:01.595923 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:50:01.595934 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:50:01.595945 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.595956 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.595973 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.595984 | orchestrator |
2026-03-26 02:50:01.595995 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 02:50:01.596006 | orchestrator | Thursday 26 March 2026 02:50:00 +0000 (0:00:00.609) 0:00:59.903 ********
2026-03-26 02:50:01.596017 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.596028 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:50:01.596039 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:50:01.596050 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:50:01.596061 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:50:01.596072 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:50:01.596083 | orchestrator |
2026-03-26 02:50:01.596093 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 02:50:01.596105 | orchestrator | Thursday 26 March 2026 02:50:01 +0000 (0:00:00.898) 0:01:00.801 ********
2026-03-26 02:50:01.596116 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:50:01.596133 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:51:23.020310 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:51:23.020422 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:51:23.020434 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:51:23.020441 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:51:23.020447 | orchestrator |
2026-03-26 02:51:23.020455 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 02:51:23.020524 | orchestrator | Thursday 26 March 2026 02:50:01 +0000 (0:00:00.603) 0:01:01.405 ********
2026-03-26 02:51:23.020532 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:51:23.020538 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:51:23.020545 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:51:23.020561 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:51:23.020569 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:51:23.020583 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:51:23.020590 | orchestrator |
2026-03-26 02:51:23.020596 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 02:51:23.020603 | orchestrator | Thursday 26 March 2026 02:50:02 +0000 (0:00:00.919) 0:01:02.324 ********
2026-03-26 02:51:23.020609 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:51:23.020615 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:51:23.020622 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:51:23.020628 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:51:23.020634 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:51:23.020640 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:51:23.020667 | orchestrator |
2026-03-26 02:51:23.020674 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 02:51:23.020680 | orchestrator | Thursday 26 March 2026 02:50:03 +0000 (0:00:00.998) 0:01:03.322 ********
2026-03-26 02:51:23.020687 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:51:23.020693 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:51:23.020699 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:51:23.020705 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:51:23.020712 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:51:23.020718 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:51:23.020725 | orchestrator |
2026-03-26 02:51:23.020736 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 02:51:23.020746 | orchestrator | Thursday 26 March 2026 02:50:05 +0000 (0:00:01.382) 0:01:04.705 ********
2026-03-26 02:51:23.020756 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:51:23.020766 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:51:23.020776 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:51:23.020786 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:51:23.020796 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:51:23.020806 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:51:23.020816 | orchestrator |
2026-03-26 02:51:23.020828 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 02:51:23.020840 | orchestrator | Thursday 26 March 2026 02:50:06 +0000 (0:00:01.512) 0:01:06.217 ********
2026-03-26 02:51:23.020850 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:51:23.020860 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:51:23.020867 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:51:23.020874 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:51:23.020880 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:51:23.020886 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:51:23.020893 | orchestrator |
2026-03-26 02:51:23.020899 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 02:51:23.020906 | orchestrator | Thursday 26 March 2026 02:50:08 +0000 (0:00:02.252) 0:01:08.470 ********
2026-03-26 02:51:23.020913 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:51:23.020921 | orchestrator |
2026-03-26 02:51:23.020928 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 02:51:23.020934 | orchestrator | Thursday 26 March 2026 02:50:10 +0000 (0:00:01.342) 0:01:09.812 ********
2026-03-26 02:51:23.020941 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:51:23.020947 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:51:23.020953 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:51:23.020959 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:51:23.020966 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:51:23.020972 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:51:23.020978 | orchestrator |
2026-03-26 02:51:23.020985 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 02:51:23.020991 | orchestrator | Thursday 26 March 2026 02:50:10 +0000 (0:00:00.682) 0:01:10.495 ********
2026-03-26 02:51:23.020997 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:51:23.021003 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:51:23.021010 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:51:23.021016 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:51:23.021022 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:51:23.021028 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:51:23.021034 | orchestrator |
2026-03-26 02:51:23.021041 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 02:51:23.021047 | orchestrator | Thursday 26 March 2026 02:50:11 +0000 (0:00:00.966) 0:01:11.462 ********
2026-03-26 02:51:23.021053 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 02:51:23.021072 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 02:51:23.021085 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 02:51:23.021091 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 02:51:23.021098 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 02:51:23.021104 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 02:51:23.021111 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 02:51:23.021118 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 02:51:23.021124 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 02:51:23.021146 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 02:51:23.021153 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 02:51:23.021159 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 02:51:23.021166 | orchestrator |
2026-03-26 02:51:23.021172 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 02:51:23.021178 | orchestrator | Thursday 26 March 2026 02:50:13 +0000 (0:00:01.348) 0:01:12.810 ********
2026-03-26 02:51:23.021184 | orchestrator |
changed: [testbed-node-5] 2026-03-26 02:51:23.021191 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:51:23.021197 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:51:23.021203 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:51:23.021210 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:51:23.021216 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:51:23.021222 | orchestrator | 2026-03-26 02:51:23.021228 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-26 02:51:23.021235 | orchestrator | Thursday 26 March 2026 02:50:14 +0000 (0:00:01.234) 0:01:14.045 ******** 2026-03-26 02:51:23.021241 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:23.021247 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:23.021254 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:23.021260 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:23.021266 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:23.021272 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:23.021279 | orchestrator | 2026-03-26 02:51:23.021285 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-26 02:51:23.021291 | orchestrator | Thursday 26 March 2026 02:50:15 +0000 (0:00:00.679) 0:01:14.725 ******** 2026-03-26 02:51:23.021298 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:23.021304 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:23.021310 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:23.021317 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:23.021323 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:23.021329 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:23.021335 | orchestrator | 2026-03-26 02:51:23.021342 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 02:51:23.021348 | 
orchestrator | Thursday 26 March 2026 02:50:16 +0000 (0:00:00.941) 0:01:15.667 ******** 2026-03-26 02:51:23.021354 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:23.021361 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:23.021367 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:23.021373 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:23.021380 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:23.021386 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:23.021392 | orchestrator | 2026-03-26 02:51:23.021398 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 02:51:23.021405 | orchestrator | Thursday 26 March 2026 02:50:16 +0000 (0:00:00.626) 0:01:16.293 ******** 2026-03-26 02:51:23.021416 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:51:23.021422 | orchestrator | 2026-03-26 02:51:23.021429 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-26 02:51:23.021435 | orchestrator | Thursday 26 March 2026 02:50:18 +0000 (0:00:01.326) 0:01:17.620 ******** 2026-03-26 02:51:23.021441 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:51:23.021448 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:51:23.021454 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:51:23.021478 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:51:23.021485 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:51:23.021491 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:51:23.021497 | orchestrator | 2026-03-26 02:51:23.021504 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-26 02:51:23.021510 | orchestrator | Thursday 26 March 2026 02:51:22 +0000 (0:01:04.165) 0:02:21.785 ******** 2026-03-26 
02:51:23.021516 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 02:51:23.021523 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 02:51:23.021529 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 02:51:23.021535 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:23.021542 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 02:51:23.021548 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 02:51:23.021554 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 02:51:23.021560 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:23.021567 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 02:51:23.021573 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 02:51:23.021583 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 02:51:23.021590 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:23.021596 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 02:51:23.021603 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 02:51:23.021609 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 02:51:23.021615 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:23.021622 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 02:51:23.021628 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 02:51:23.021634 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-03-26 02:51:23.021645 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.521877 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 02:51:47.522005 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 02:51:47.522090 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 02:51:47.522105 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.522118 | orchestrator | 2026-03-26 02:51:47.522130 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-26 02:51:47.522141 | orchestrator | Thursday 26 March 2026 02:51:23 +0000 (0:00:00.742) 0:02:22.528 ******** 2026-03-26 02:51:47.522152 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.522163 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.522175 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.522186 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.522197 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.522233 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.522245 | orchestrator | 2026-03-26 02:51:47.522257 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-26 02:51:47.522268 | orchestrator | Thursday 26 March 2026 02:51:23 +0000 (0:00:00.947) 0:02:23.475 ******** 2026-03-26 02:51:47.522280 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.522291 | orchestrator | 2026-03-26 02:51:47.522304 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-26 02:51:47.522317 | orchestrator | Thursday 26 March 2026 02:51:24 +0000 (0:00:00.167) 0:02:23.643 ******** 2026-03-26 02:51:47.522330 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.522343 | orchestrator | 
skipping: [testbed-node-4] 2026-03-26 02:51:47.522355 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.522367 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.522380 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.522393 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.522405 | orchestrator | 2026-03-26 02:51:47.522418 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-26 02:51:47.522431 | orchestrator | Thursday 26 March 2026 02:51:24 +0000 (0:00:00.674) 0:02:24.318 ******** 2026-03-26 02:51:47.522443 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.522456 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.522468 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.522499 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.522512 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.522524 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.522536 | orchestrator | 2026-03-26 02:51:47.522549 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-26 02:51:47.522562 | orchestrator | Thursday 26 March 2026 02:51:25 +0000 (0:00:00.881) 0:02:25.199 ******** 2026-03-26 02:51:47.522574 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.522586 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.522598 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.522611 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.522624 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.522636 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.522648 | orchestrator | 2026-03-26 02:51:47.522660 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 02:51:47.522671 | orchestrator | Thursday 26 March 2026 02:51:26 +0000 (0:00:00.695) 
0:02:25.894 ******** 2026-03-26 02:51:47.522682 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:51:47.522695 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:51:47.522705 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:51:47.522716 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:51:47.522727 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:51:47.522738 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:51:47.522749 | orchestrator | 2026-03-26 02:51:47.522760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 02:51:47.522771 | orchestrator | Thursday 26 March 2026 02:51:29 +0000 (0:00:03.470) 0:02:29.365 ******** 2026-03-26 02:51:47.522782 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:51:47.522793 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:51:47.522803 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:51:47.522814 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:51:47.522825 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:51:47.522835 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:51:47.522846 | orchestrator | 2026-03-26 02:51:47.522857 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 02:51:47.522868 | orchestrator | Thursday 26 March 2026 02:51:30 +0000 (0:00:00.685) 0:02:30.050 ******** 2026-03-26 02:51:47.522881 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:51:47.522893 | orchestrator | 2026-03-26 02:51:47.522905 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-26 02:51:47.522923 | orchestrator | Thursday 26 March 2026 02:51:31 +0000 (0:00:01.322) 0:02:31.372 ******** 2026-03-26 02:51:47.522934 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.522945 | orchestrator | skipping: 
[testbed-node-4] 2026-03-26 02:51:47.522956 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.522967 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.522991 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523003 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523014 | orchestrator | 2026-03-26 02:51:47.523025 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-26 02:51:47.523036 | orchestrator | Thursday 26 March 2026 02:51:32 +0000 (0:00:00.904) 0:02:32.276 ******** 2026-03-26 02:51:47.523046 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.523057 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.523069 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.523080 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.523091 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523101 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523112 | orchestrator | 2026-03-26 02:51:47.523123 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-26 02:51:47.523134 | orchestrator | Thursday 26 March 2026 02:51:33 +0000 (0:00:00.626) 0:02:32.903 ******** 2026-03-26 02:51:47.523146 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.523174 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.523186 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.523197 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.523208 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523219 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523230 | orchestrator | 2026-03-26 02:51:47.523241 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-26 02:51:47.523252 | orchestrator | Thursday 26 March 2026 02:51:34 +0000 (0:00:00.898) 
0:02:33.801 ******** 2026-03-26 02:51:47.523263 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.523274 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.523285 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.523295 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.523306 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523317 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523328 | orchestrator | 2026-03-26 02:51:47.523339 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-26 02:51:47.523350 | orchestrator | Thursday 26 March 2026 02:51:34 +0000 (0:00:00.702) 0:02:34.503 ******** 2026-03-26 02:51:47.523361 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.523371 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.523382 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.523393 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.523404 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523415 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523426 | orchestrator | 2026-03-26 02:51:47.523437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-26 02:51:47.523448 | orchestrator | Thursday 26 March 2026 02:51:35 +0000 (0:00:00.898) 0:02:35.402 ******** 2026-03-26 02:51:47.523459 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.523469 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.523507 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.523519 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.523530 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523541 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523552 | orchestrator | 2026-03-26 02:51:47.523563 | orchestrator | TASK [ceph-container-common : Set_fact 
ceph_release pacific] ******************* 2026-03-26 02:51:47.523574 | orchestrator | Thursday 26 March 2026 02:51:36 +0000 (0:00:00.664) 0:02:36.066 ******** 2026-03-26 02:51:47.523593 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.523604 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.523615 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.523626 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.523637 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523648 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523659 | orchestrator | 2026-03-26 02:51:47.523671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-26 02:51:47.523682 | orchestrator | Thursday 26 March 2026 02:51:37 +0000 (0:00:00.963) 0:02:37.030 ******** 2026-03-26 02:51:47.523693 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:51:47.523704 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:51:47.523715 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:51:47.523726 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:51:47.523737 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:51:47.523748 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:51:47.523759 | orchestrator | 2026-03-26 02:51:47.523770 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-26 02:51:47.523781 | orchestrator | Thursday 26 March 2026 02:51:38 +0000 (0:00:00.915) 0:02:37.945 ******** 2026-03-26 02:51:47.523792 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:51:47.523803 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:51:47.523814 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:51:47.523825 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:51:47.523836 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:51:47.523847 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:51:47.523858 
| orchestrator | 2026-03-26 02:51:47.523869 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 02:51:47.523880 | orchestrator | Thursday 26 March 2026 02:51:39 +0000 (0:00:01.391) 0:02:39.337 ******** 2026-03-26 02:51:47.523904 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:51:47.523918 | orchestrator | 2026-03-26 02:51:47.523929 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-26 02:51:47.523940 | orchestrator | Thursday 26 March 2026 02:51:41 +0000 (0:00:01.321) 0:02:40.658 ******** 2026-03-26 02:51:47.523952 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-26 02:51:47.523964 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-26 02:51:47.523975 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-26 02:51:47.523986 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-26 02:51:47.523997 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-26 02:51:47.524008 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-26 02:51:47.524019 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-26 02:51:47.524036 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-26 02:51:47.524047 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-03-26 02:51:47.524059 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-26 02:51:47.524070 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-26 02:51:47.524081 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-26 02:51:47.524092 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-26 02:51:47.524103 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-26 02:51:47.524114 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-26 02:51:47.524126 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-26 02:51:47.524137 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-26 02:51:47.524156 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-26 02:51:53.101927 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-26 02:51:53.102073 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-26 02:51:53.102085 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-26 02:51:53.102093 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-26 02:51:53.102099 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-26 02:51:53.102106 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-26 02:51:53.102112 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-26 02:51:53.102119 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-26 02:51:53.102125 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-26 02:51:53.102132 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-26 02:51:53.102138 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-26 02:51:53.102144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-26 02:51:53.102151 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-26 02:51:53.102157 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-26 02:51:53.102163 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-26 02:51:53.102170 | orchestrator | changed: [testbed-node-0] 
=> (item=/var/lib/ceph/tmp) 2026-03-26 02:51:53.102176 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-26 02:51:53.102183 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-26 02:51:53.102190 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-26 02:51:53.102198 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-26 02:51:53.102205 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-26 02:51:53.102212 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-26 02:51:53.102220 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-26 02:51:53.102228 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-26 02:51:53.102235 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-26 02:51:53.102242 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-26 02:51:53.102250 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-26 02:51:53.102257 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-26 02:51:53.102265 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 02:51:53.102272 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 02:51:53.102279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-26 02:51:53.102287 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 02:51:53.102294 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 02:51:53.102301 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 02:51:53.102309 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 
02:51:53.102316 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-26 02:51:53.102323 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 02:51:53.102331 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 02:51:53.102339 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 02:51:53.102346 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 02:51:53.102354 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 02:51:53.102361 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 02:51:53.102368 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 02:51:53.102382 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 02:51:53.102389 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 02:51:53.102396 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 02:51:53.102404 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 02:51:53.102411 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 02:51:53.102418 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 02:51:53.102438 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 02:51:53.102445 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 02:51:53.102453 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 02:51:53.102460 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 02:51:53.102467 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/ceph/bootstrap-mds)
2026-03-26 02:51:53.102474 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 02:51:53.102481 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 02:51:53.102516 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 02:51:53.102530 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 02:51:53.102553 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 02:51:53.102561 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 02:51:53.102568 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 02:51:53.102576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 02:51:53.102583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 02:51:53.102591 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-26 02:51:53.102599 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-26 02:51:53.102606 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-26 02:51:53.102614 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 02:51:53.102621 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 02:51:53.102629 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-26 02:51:53.102636 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-26 02:51:53.102643 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-26 02:51:53.102650 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-26 02:51:53.102658 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 02:51:53.102665 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-26 02:51:53.102673 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-26 02:51:53.102680 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-26 02:51:53.102687 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-26 02:51:53.102695 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-26 02:51:53.102702 | orchestrator |
2026-03-26 02:51:53.102710 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 02:51:53.102718 | orchestrator | Thursday 26 March 2026 02:51:47 +0000 (0:00:06.356) 0:02:47.015 ********
2026-03-26 02:51:53.102725 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:51:53.102732 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:51:53.102740 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:51:53.102792 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:51:53.102809 | orchestrator |
2026-03-26 02:51:53.102817 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-26 02:51:53.102824 | orchestrator | Thursday 26 March 2026 02:51:48 +0000 (0:00:01.133) 0:02:48.148 ********
2026-03-26 02:51:53.102831 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 02:51:53.102839 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-26 02:51:53.102846 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 02:51:53.102854 | orchestrator |
2026-03-26 02:51:53.102861 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-26 02:51:53.102868 | orchestrator | Thursday 26 March 2026 02:51:49 +0000 (0:00:00.745) 0:02:48.894 ********
2026-03-26 02:51:53.102876 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 02:51:53.102883 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-26 02:51:53.102891 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 02:51:53.102898 | orchestrator |
2026-03-26 02:51:53.102905 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 02:51:53.102913 | orchestrator | Thursday 26 March 2026 02:51:50 +0000 (0:00:01.240) 0:02:50.134 ********
2026-03-26 02:51:53.102920 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:51:53.102927 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:51:53.102935 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:51:53.102942 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:51:53.102949 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:51:53.102956 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:51:53.102964 | orchestrator |
2026-03-26 02:51:53.102971 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 02:51:53.102994 | orchestrator | Thursday 26 March 2026 02:51:51 +0000 (0:00:00.920) 0:02:51.054 ********
2026-03-26 02:51:53.103002 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:51:53.103009 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:51:53.103017 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:51:53.103024 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:51:53.103031 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:51:53.103039 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:51:53.103046 | orchestrator |
2026-03-26 02:51:53.103053 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 02:51:53.103061 | orchestrator | Thursday 26 March 2026 02:51:52 +0000 (0:00:00.889) 0:02:51.719 ********
2026-03-26 02:51:53.103068 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:51:53.103075 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:51:53.103083 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:51:53.103090 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:51:53.103097 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:51:53.103105 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:51:53.103112 | orchestrator |
2026-03-26 02:51:53.103124 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 02:52:07.410104 | orchestrator | Thursday 26 March 2026 02:51:53 +0000 (0:00:00.889) 0:02:52.609 ********
2026-03-26 02:52:07.410207 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.410222 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.410233 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.410243 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.410252 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.410261 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.410290 | orchestrator |
2026-03-26 02:52:07.410300 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 02:52:07.410309 | orchestrator | Thursday 26 March 2026 02:51:53 +0000 (0:00:00.631) 0:02:53.240 ********
2026-03-26 02:52:07.410318 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.410327 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.410336 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.410345 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.410353 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.410362 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.410370 | orchestrator |
2026-03-26 02:52:07.410380 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 02:52:07.410390 | orchestrator | Thursday 26 March 2026 02:51:54 +0000 (0:00:00.939) 0:02:54.180 ********
2026-03-26 02:52:07.410399 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.410408 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.410417 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.410425 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.410434 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.410447 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.410461 | orchestrator |
2026-03-26 02:52:07.410475 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 02:52:07.410490 | orchestrator | Thursday 26 March 2026 02:51:55 +0000 (0:00:00.688) 0:02:54.869 ********
2026-03-26 02:52:07.410535 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.410549 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.410562 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.410576 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.410589 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.410603 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.410615 | orchestrator |
2026-03-26 02:52:07.410628 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 02:52:07.410643 | orchestrator | Thursday 26 March 2026 02:51:56 +0000 (0:00:00.934) 0:02:55.803 ********
2026-03-26 02:52:07.410657 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.410671 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.410686 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.410701 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.410715 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.410729 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.410743 | orchestrator |
2026-03-26 02:52:07.410758 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 02:52:07.410773 | orchestrator | Thursday 26 March 2026 02:51:56 +0000 (0:00:00.685) 0:02:56.489 ********
2026-03-26 02:52:07.410788 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.410803 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.410817 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.410832 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:07.410854 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:07.410870 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:07.410883 | orchestrator |
2026-03-26 02:52:07.410897 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 02:52:07.410911 | orchestrator | Thursday 26 March 2026 02:51:59 +0000 (0:00:02.982) 0:02:59.471 ********
2026-03-26 02:52:07.410925 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:07.410937 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:07.410949 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:07.410962 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.410976 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.410990 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411005 | orchestrator |
2026-03-26 02:52:07.411021 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 02:52:07.411049 | orchestrator | Thursday 26 March 2026 02:52:00 +0000 (0:00:00.688) 0:03:00.159 ********
2026-03-26 02:52:07.411059 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:07.411068 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:07.411077 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:07.411086 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411094 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411103 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411112 | orchestrator |
2026-03-26 02:52:07.411121 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 02:52:07.411130 | orchestrator | Thursday 26 March 2026 02:52:01 +0000 (0:00:00.963) 0:03:01.122 ********
2026-03-26 02:52:07.411139 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.411148 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.411170 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.411179 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411188 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411196 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411205 | orchestrator |
2026-03-26 02:52:07.411214 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 02:52:07.411223 | orchestrator | Thursday 26 March 2026 02:52:02 +0000 (0:00:00.895) 0:03:02.018 ********
2026-03-26 02:52:07.411232 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 02:52:07.411243 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-26 02:52:07.411252 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 02:52:07.411262 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411290 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411300 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411309 | orchestrator |
2026-03-26 02:52:07.411317 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 02:52:07.411326 | orchestrator | Thursday 26 March 2026 02:52:03 +0000 (0:00:00.702) 0:03:02.720 ********
2026-03-26 02:52:07.411337 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-26 02:52:07.411350 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-26 02:52:07.411360 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.411369 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-26 02:52:07.411379 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-26 02:52:07.411388 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.411396 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-26 02:52:07.411412 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-26 02:52:07.411421 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.411430 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411439 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411448 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411457 | orchestrator |
2026-03-26 02:52:07.411466 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 02:52:07.411475 | orchestrator | Thursday 26 March 2026 02:52:04 +0000 (0:00:01.098) 0:03:03.818 ********
2026-03-26 02:52:07.411484 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.411492 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.411533 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.411547 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411557 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411566 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411575 | orchestrator |
2026-03-26 02:52:07.411583 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 02:52:07.411592 | orchestrator | Thursday 26 March 2026 02:52:04 +0000 (0:00:00.647) 0:03:04.466 ********
2026-03-26 02:52:07.411601 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.411610 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.411619 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.411628 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411636 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411645 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411654 | orchestrator |
2026-03-26 02:52:07.411663 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 02:52:07.411677 | orchestrator | Thursday 26 March 2026 02:52:05 +0000 (0:00:00.871) 0:03:05.337 ********
2026-03-26 02:52:07.411687 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.411696 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.411704 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.411713 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411722 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411731 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411739 | orchestrator |
2026-03-26 02:52:07.411749 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 02:52:07.411767 | orchestrator | Thursday 26 March 2026 02:52:06 +0000 (0:00:00.713) 0:03:06.051 ********
2026-03-26 02:52:07.411788 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:07.411803 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:07.411818 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:07.411833 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:07.411848 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:07.411858 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:07.411867 | orchestrator |
2026-03-26 02:52:07.411876 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 02:52:07.411892 | orchestrator | Thursday 26 March 2026 02:52:07 +0000 (0:00:00.860) 0:03:06.912 ********
2026-03-26 02:52:25.208101 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208190 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:25.208197 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:25.208202 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:25.208206 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:25.208211 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:25.208232 | orchestrator |
2026-03-26 02:52:25.208238 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 02:52:25.208244 | orchestrator | Thursday 26 March 2026 02:52:08 +0000 (0:00:00.682) 0:03:07.594 ********
2026-03-26 02:52:25.208248 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:25.208253 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:25.208257 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:25.208261 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:25.208265 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:25.208269 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:25.208273 | orchestrator |
2026-03-26 02:52:25.208277 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 02:52:25.208282 | orchestrator | Thursday 26 March 2026 02:52:08 +0000 (0:00:00.894) 0:03:08.489 ********
2026-03-26 02:52:25.208286 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:52:25.208290 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:52:25.208294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:52:25.208299 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208303 | orchestrator |
2026-03-26 02:52:25.208307 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 02:52:25.208311 | orchestrator | Thursday 26 March 2026 02:52:09 +0000 (0:00:00.460) 0:03:08.949 ********
2026-03-26 02:52:25.208315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:52:25.208331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:52:25.208340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:52:25.208344 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208348 | orchestrator |
2026-03-26 02:52:25.208352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 02:52:25.208356 | orchestrator | Thursday 26 March 2026 02:52:09 +0000 (0:00:00.447) 0:03:09.396 ********
2026-03-26 02:52:25.208360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:52:25.208364 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:52:25.208368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:52:25.208372 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208376 | orchestrator |
2026-03-26 02:52:25.208381 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 02:52:25.208388 | orchestrator | Thursday 26 March 2026 02:52:10 +0000 (0:00:00.670) 0:03:09.863 ********
2026-03-26 02:52:25.208395 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:25.208402 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:25.208408 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:25.208415 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:25.208422 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:25.208429 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:25.208436 | orchestrator |
2026-03-26 02:52:25.208443 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 02:52:25.208450 | orchestrator | Thursday 26 March 2026 02:52:11 +0000 (0:00:00.670) 0:03:10.534 ********
2026-03-26 02:52:25.208456 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 02:52:25.208460 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-26 02:52:25.208464 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-26 02:52:25.208468 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-26 02:52:25.208472 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:25.208476 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-26 02:52:25.208480 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:25.208484 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-26 02:52:25.208488 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:25.208492 | orchestrator |
2026-03-26 02:52:25.208496 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 02:52:25.208505 | orchestrator | Thursday 26 March 2026 02:52:12 +0000 (0:00:01.812) 0:03:12.346 ********
2026-03-26 02:52:25.208509 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:52:25.208560 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:52:25.208564 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:52:25.208568 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:52:25.208572 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:52:25.208581 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:52:25.208585 | orchestrator |
2026-03-26 02:52:25.208590 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-26 02:52:25.208593 | orchestrator | Thursday 26 March 2026 02:52:15 +0000 (0:00:02.709) 0:03:15.056 ********
2026-03-26 02:52:25.208598 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:52:25.208612 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:52:25.208616 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:52:25.208620 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:52:25.208625 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:52:25.208634 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:52:25.208639 | orchestrator |
2026-03-26 02:52:25.208643 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-26 02:52:25.208648 | orchestrator | Thursday 26 March 2026 02:52:16 +0000 (0:00:01.012) 0:03:16.069 ********
2026-03-26 02:52:25.208653 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208657 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:25.208661 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:25.208666 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:52:25.208671 | orchestrator |
2026-03-26 02:52:25.208676 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-26 02:52:25.208681 | orchestrator | Thursday 26 March 2026 02:52:17 +0000 (0:00:01.151) 0:03:17.220 ********
2026-03-26 02:52:25.208685 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:52:25.208700 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:52:25.208704 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:52:25.208709 | orchestrator |
2026-03-26 02:52:25.208713 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-26 02:52:25.208718 | orchestrator | Thursday 26 March 2026 02:52:18 +0000 (0:00:00.329) 0:03:17.550 ********
2026-03-26 02:52:25.208723 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:52:25.208727 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:52:25.208732 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:52:25.208736 | orchestrator |
2026-03-26 02:52:25.208741 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-26 02:52:25.208745 | orchestrator | Thursday 26 March 2026 02:52:19 +0000 (0:00:01.531) 0:03:19.081 ********
2026-03-26 02:52:25.208749 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 02:52:25.208754 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 02:52:25.208758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 02:52:25.208763 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:25.208767 | orchestrator |
2026-03-26 02:52:25.208772 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-26 02:52:25.208776 | orchestrator | Thursday 26 March 2026 02:52:20 +0000 (0:00:00.693) 0:03:19.775 ********
2026-03-26 02:52:25.208781 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:52:25.208785 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:52:25.208790 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:52:25.208794 | orchestrator |
2026-03-26 02:52:25.208799 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-26 02:52:25.208803 | orchestrator | Thursday 26 March 2026 02:52:20 +0000 (0:00:00.363) 0:03:20.139 ********
2026-03-26 02:52:25.208807 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:25.208812 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:25.208816 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:25.208825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:52:25.208829 | orchestrator |
2026-03-26 02:52:25.208834 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-26 02:52:25.208839 | orchestrator | Thursday 26 March 2026 02:52:21 +0000 (0:00:01.200) 0:03:21.340 ********
2026-03-26 02:52:25.208843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:52:25.208848 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:52:25.208852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:52:25.208857 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208861 | orchestrator |
2026-03-26 02:52:25.208865 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-26 02:52:25.208869 | orchestrator | Thursday 26 March 2026 02:52:22 +0000 (0:00:00.429) 0:03:21.769 ********
2026-03-26 02:52:25.208873 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208877 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:25.208881 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:25.208885 | orchestrator |
2026-03-26 02:52:25.208889 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-26 02:52:25.208893 | orchestrator | Thursday 26 March 2026 02:52:22 +0000 (0:00:00.336) 0:03:22.105 ********
2026-03-26 02:52:25.208897 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208901 | orchestrator |
2026-03-26 02:52:25.208905 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-26 02:52:25.208909 | orchestrator | Thursday 26 March 2026 02:52:22 +0000 (0:00:00.546) 0:03:22.352 ********
2026-03-26 02:52:25.208913 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208917 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:25.208921 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:25.208925 | orchestrator |
2026-03-26 02:52:25.208928 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-26 02:52:25.208932 | orchestrator | Thursday 26 March 2026 02:52:23 +0000 (0:00:00.546) 0:03:22.899 ********
2026-03-26 02:52:25.208936 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208940 | orchestrator |
2026-03-26 02:52:25.208944 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-26 02:52:25.208948 | orchestrator | Thursday 26 March 2026 02:52:23 +0000 (0:00:00.261) 0:03:23.160 ********
2026-03-26 02:52:25.208952 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208956 | orchestrator |
2026-03-26 02:52:25.208960 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-26 02:52:25.208964 | orchestrator | Thursday 26 March 2026 02:52:23 +0000 (0:00:00.255) 0:03:23.416 ********
2026-03-26 02:52:25.208968 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208972 | orchestrator |
2026-03-26 02:52:25.208976 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-26 02:52:25.208979 | orchestrator | Thursday 26 March 2026 02:52:24 +0000 (0:00:00.150) 0:03:23.567 ********
2026-03-26 02:52:25.208986 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.208990 | orchestrator |
2026-03-26 02:52:25.208994 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-26 02:52:25.208998 | orchestrator | Thursday 26 March 2026 02:52:24 +0000 (0:00:00.259) 0:03:23.827 ********
2026-03-26 02:52:25.209002 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.209006 | orchestrator |
2026-03-26 02:52:25.209010 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-26 02:52:25.209014 | orchestrator | Thursday 26 March 2026 02:52:24 +0000 (0:00:00.238) 0:03:24.065 ********
2026-03-26 02:52:25.209018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:52:25.209022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:52:25.209026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:52:25.209033 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:25.209037 | orchestrator |
2026-03-26 02:52:25.209041 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-26 02:52:25.209045 | orchestrator | Thursday 26 March 2026 02:52:25 +0000 (0:00:00.455) 0:03:24.520 ********
2026-03-26 02:52:25.209052 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:44.684100 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:52:44.684232 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:52:44.684254 | orchestrator |
2026-03-26 02:52:44.684302 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-26 02:52:44.684319 | orchestrator | Thursday 26 March 2026 02:52:25 +0000 (0:00:00.331) 0:03:24.852 ********
2026-03-26 02:52:44.684334 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:44.684348 | orchestrator |
2026-03-26 02:52:44.684362 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-26 02:52:44.684376 | orchestrator | Thursday 26 March 2026 02:52:25 +0000 (0:00:00.265) 0:03:25.117 ********
2026-03-26 02:52:44.684390 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:44.684404 | orchestrator |
2026-03-26 02:52:44.684418 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-26 02:52:44.684432 | orchestrator | Thursday 26 March 2026 02:52:26 +0000 (0:00:00.782) 0:03:25.900 ********
2026-03-26 02:52:44.684446 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:44.684460 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:44.684474 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:44.684489 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:52:44.684503 | orchestrator |
2026-03-26 02:52:44.684516 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-26 02:52:44.684596 | orchestrator | Thursday 26 March 2026 02:52:27 +0000 (0:00:00.893) 0:03:26.794 ********
2026-03-26 02:52:44.684615 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:44.684630 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:44.684646 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:44.684661 | orchestrator |
2026-03-26 02:52:44.684677 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-26 02:52:44.684692 | orchestrator | Thursday 26 March 2026 02:52:27 +0000 (0:00:00.562) 0:03:27.356 ********
2026-03-26 02:52:44.684707 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:52:44.684723 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:52:44.684737 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:52:44.684753 | orchestrator |
2026-03-26 02:52:44.684768 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-26 02:52:44.684782 | orchestrator | Thursday 26 March 2026 02:52:29 +0000 (0:00:01.284) 0:03:28.641 ********
2026-03-26 02:52:44.684796 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:52:44.684811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:52:44.684826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:52:44.684840 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:52:44.684853 | orchestrator |
2026-03-26 02:52:44.684866 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-26 02:52:44.684880 | orchestrator | Thursday 26 March 2026 02:52:29 +0000 (0:00:00.679) 0:03:29.321 ********
2026-03-26 02:52:44.684893 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:44.684906 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:44.684919 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:44.684931 | orchestrator |
2026-03-26 02:52:44.684944 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-26 02:52:44.684958 | orchestrator | Thursday 26 March 2026 02:52:30 +0000 (0:00:00.331) 0:03:29.652 ********
2026-03-26 02:52:44.684971 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:52:44.684984 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:52:44.684997 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:52:44.685040 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:52:44.685055 | orchestrator |
2026-03-26 02:52:44.685069 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-26 02:52:44.685082 | orchestrator | Thursday 26 March 2026 02:52:31 +0000 (0:00:01.196) 0:03:30.849 ********
2026-03-26 02:52:44.685096 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:52:44.685109 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:52:44.685123 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:52:44.685136 | orchestrator |
2026-03-26 02:52:44.685151 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-26 02:52:44.685164 | orchestrator | Thursday 26 March 2026 02:52:31 +0000 (0:00:00.369) 0:03:31.219 ********
2026-03-26 02:52:44.685178 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:52:44.685191 | orchestrator | changed: [testbed-node-4]
2026-03-26
02:52:44.685206 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:52:44.685219 | orchestrator | 2026-03-26 02:52:44.685234 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-26 02:52:44.685248 | orchestrator | Thursday 26 March 2026 02:52:32 +0000 (0:00:01.245) 0:03:32.465 ******** 2026-03-26 02:52:44.685261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 02:52:44.685277 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 02:52:44.685311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 02:52:44.685326 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:52:44.685339 | orchestrator | 2026-03-26 02:52:44.685352 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-26 02:52:44.685367 | orchestrator | Thursday 26 March 2026 02:52:34 +0000 (0:00:01.165) 0:03:33.630 ******** 2026-03-26 02:52:44.685381 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:52:44.685396 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:52:44.685410 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:52:44.685425 | orchestrator | 2026-03-26 02:52:44.685439 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-26 02:52:44.685454 | orchestrator | Thursday 26 March 2026 02:52:34 +0000 (0:00:00.386) 0:03:34.017 ******** 2026-03-26 02:52:44.685469 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:52:44.685483 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:52:44.685497 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:52:44.685510 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:52:44.685524 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:52:44.685565 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:52:44.685579 | orchestrator | 2026-03-26 02:52:44.685618 | orchestrator | RUNNING 
HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-26 02:52:44.685631 | orchestrator | Thursday 26 March 2026 02:52:35 +0000 (0:00:00.727) 0:03:34.744 ******** 2026-03-26 02:52:44.685644 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:52:44.685657 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:52:44.685671 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:52:44.685685 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:52:44.685700 | orchestrator | 2026-03-26 02:52:44.685714 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-26 02:52:44.685729 | orchestrator | Thursday 26 March 2026 02:52:36 +0000 (0:00:01.149) 0:03:35.894 ******** 2026-03-26 02:52:44.685744 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:52:44.685758 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:52:44.685772 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:52:44.685785 | orchestrator | 2026-03-26 02:52:44.685798 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-26 02:52:44.685812 | orchestrator | Thursday 26 March 2026 02:52:36 +0000 (0:00:00.381) 0:03:36.275 ******** 2026-03-26 02:52:44.685825 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:52:44.685854 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:52:44.685870 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:52:44.685884 | orchestrator | 2026-03-26 02:52:44.685899 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-26 02:52:44.685913 | orchestrator | Thursday 26 March 2026 02:52:38 +0000 (0:00:01.464) 0:03:37.740 ******** 2026-03-26 02:52:44.685928 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 02:52:44.685941 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-1)  2026-03-26 02:52:44.685956 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 02:52:44.685970 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:52:44.685984 | orchestrator | 2026-03-26 02:52:44.685999 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-26 02:52:44.686014 | orchestrator | Thursday 26 March 2026 02:52:38 +0000 (0:00:00.659) 0:03:38.399 ******** 2026-03-26 02:52:44.686104 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:52:44.686120 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:52:44.686135 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:52:44.686189 | orchestrator | 2026-03-26 02:52:44.686205 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-03-26 02:52:44.686220 | orchestrator | 2026-03-26 02:52:44.686234 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-26 02:52:44.686248 | orchestrator | Thursday 26 March 2026 02:52:39 +0000 (0:00:00.630) 0:03:39.030 ******** 2026-03-26 02:52:44.686263 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:52:44.686279 | orchestrator | 2026-03-26 02:52:44.686294 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-26 02:52:44.686308 | orchestrator | Thursday 26 March 2026 02:52:40 +0000 (0:00:00.811) 0:03:39.841 ******** 2026-03-26 02:52:44.686321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:52:44.686335 | orchestrator | 2026-03-26 02:52:44.686350 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-26 02:52:44.686364 | orchestrator | Thursday 26 March 2026 02:52:40 +0000 
(0:00:00.567) 0:03:40.409 ******** 2026-03-26 02:52:44.686378 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:52:44.686392 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:52:44.686407 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:52:44.686421 | orchestrator | 2026-03-26 02:52:44.686435 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-26 02:52:44.686448 | orchestrator | Thursday 26 March 2026 02:52:41 +0000 (0:00:00.741) 0:03:41.151 ******** 2026-03-26 02:52:44.686462 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:52:44.686476 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:52:44.686490 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:52:44.686504 | orchestrator | 2026-03-26 02:52:44.686518 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-26 02:52:44.686563 | orchestrator | Thursday 26 March 2026 02:52:42 +0000 (0:00:00.587) 0:03:41.738 ******** 2026-03-26 02:52:44.686576 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:52:44.686588 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:52:44.686600 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:52:44.686613 | orchestrator | 2026-03-26 02:52:44.686626 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-26 02:52:44.686639 | orchestrator | Thursday 26 March 2026 02:52:42 +0000 (0:00:00.375) 0:03:42.114 ******** 2026-03-26 02:52:44.686652 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:52:44.686663 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:52:44.686688 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:52:44.686703 | orchestrator | 2026-03-26 02:52:44.686718 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-26 02:52:44.686733 | orchestrator | Thursday 26 March 2026 02:52:42 +0000 (0:00:00.317) 
0:03:42.431 ******** 2026-03-26 02:52:44.686762 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:52:44.686775 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:52:44.686788 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:52:44.686802 | orchestrator | 2026-03-26 02:52:44.686816 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-26 02:52:44.686830 | orchestrator | Thursday 26 March 2026 02:52:43 +0000 (0:00:00.729) 0:03:43.160 ******** 2026-03-26 02:52:44.686844 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:52:44.686858 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:52:44.686871 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:52:44.686884 | orchestrator | 2026-03-26 02:52:44.686897 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-26 02:52:44.686910 | orchestrator | Thursday 26 March 2026 02:52:44 +0000 (0:00:00.626) 0:03:43.787 ******** 2026-03-26 02:52:44.686922 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:52:44.686935 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:52:44.686966 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:53:06.915967 | orchestrator | 2026-03-26 02:53:06.916049 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-26 02:53:06.916057 | orchestrator | Thursday 26 March 2026 02:52:44 +0000 (0:00:00.399) 0:03:44.187 ******** 2026-03-26 02:53:06.916063 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916069 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916074 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916079 | orchestrator | 2026-03-26 02:53:06.916084 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 02:53:06.916089 | orchestrator | Thursday 26 March 2026 02:52:45 +0000 (0:00:00.853) 0:03:45.040 ******** 2026-03-26 
02:53:06.916094 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916099 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916104 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916109 | orchestrator | 2026-03-26 02:53:06.916114 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 02:53:06.916119 | orchestrator | Thursday 26 March 2026 02:52:46 +0000 (0:00:00.753) 0:03:45.793 ******** 2026-03-26 02:53:06.916124 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:53:06.916130 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:53:06.916135 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:53:06.916140 | orchestrator | 2026-03-26 02:53:06.916144 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 02:53:06.916149 | orchestrator | Thursday 26 March 2026 02:52:46 +0000 (0:00:00.609) 0:03:46.403 ******** 2026-03-26 02:53:06.916154 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916159 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916164 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916169 | orchestrator | 2026-03-26 02:53:06.916174 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 02:53:06.916179 | orchestrator | Thursday 26 March 2026 02:52:47 +0000 (0:00:00.373) 0:03:46.776 ******** 2026-03-26 02:53:06.916184 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:53:06.916189 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:53:06.916194 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:53:06.916199 | orchestrator | 2026-03-26 02:53:06.916204 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 02:53:06.916208 | orchestrator | Thursday 26 March 2026 02:52:47 +0000 (0:00:00.324) 0:03:47.101 ******** 2026-03-26 02:53:06.916213 | orchestrator | 
skipping: [testbed-node-0] 2026-03-26 02:53:06.916218 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:53:06.916223 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:53:06.916228 | orchestrator | 2026-03-26 02:53:06.916233 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 02:53:06.916238 | orchestrator | Thursday 26 March 2026 02:52:48 +0000 (0:00:00.576) 0:03:47.677 ******** 2026-03-26 02:53:06.916243 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:53:06.916265 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:53:06.916270 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:53:06.916275 | orchestrator | 2026-03-26 02:53:06.916280 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 02:53:06.916285 | orchestrator | Thursday 26 March 2026 02:52:48 +0000 (0:00:00.356) 0:03:48.033 ******** 2026-03-26 02:53:06.916289 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:53:06.916294 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:53:06.916299 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:53:06.916304 | orchestrator | 2026-03-26 02:53:06.916309 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 02:53:06.916313 | orchestrator | Thursday 26 March 2026 02:52:48 +0000 (0:00:00.355) 0:03:48.389 ******** 2026-03-26 02:53:06.916318 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:53:06.916323 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:53:06.916328 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:53:06.916333 | orchestrator | 2026-03-26 02:53:06.916338 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 02:53:06.916342 | orchestrator | Thursday 26 March 2026 02:52:49 +0000 (0:00:00.348) 0:03:48.737 ******** 2026-03-26 02:53:06.916347 | orchestrator | ok: 
[testbed-node-0] 2026-03-26 02:53:06.916352 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916357 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916362 | orchestrator | 2026-03-26 02:53:06.916367 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 02:53:06.916371 | orchestrator | Thursday 26 March 2026 02:52:49 +0000 (0:00:00.666) 0:03:49.404 ******** 2026-03-26 02:53:06.916376 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916381 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916386 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916391 | orchestrator | 2026-03-26 02:53:06.916395 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 02:53:06.916400 | orchestrator | Thursday 26 March 2026 02:52:50 +0000 (0:00:00.379) 0:03:49.784 ******** 2026-03-26 02:53:06.916405 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916410 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916415 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916419 | orchestrator | 2026-03-26 02:53:06.916434 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-26 02:53:06.916440 | orchestrator | Thursday 26 March 2026 02:52:50 +0000 (0:00:00.593) 0:03:50.377 ******** 2026-03-26 02:53:06.916445 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916449 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916454 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916459 | orchestrator | 2026-03-26 02:53:06.916464 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-26 02:53:06.916469 | orchestrator | Thursday 26 March 2026 02:52:51 +0000 (0:00:00.624) 0:03:51.002 ******** 2026-03-26 02:53:06.916474 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-26 02:53:06.916479 | orchestrator | 2026-03-26 02:53:06.916484 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-26 02:53:06.916489 | orchestrator | Thursday 26 March 2026 02:52:52 +0000 (0:00:00.610) 0:03:51.613 ******** 2026-03-26 02:53:06.916494 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:53:06.916499 | orchestrator | 2026-03-26 02:53:06.916504 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-26 02:53:06.916518 | orchestrator | Thursday 26 March 2026 02:52:52 +0000 (0:00:00.156) 0:03:51.769 ******** 2026-03-26 02:53:06.916524 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-26 02:53:06.916529 | orchestrator | 2026-03-26 02:53:06.916534 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-26 02:53:06.916539 | orchestrator | Thursday 26 March 2026 02:52:53 +0000 (0:00:01.067) 0:03:52.836 ******** 2026-03-26 02:53:06.916549 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916613 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916619 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916625 | orchestrator | 2026-03-26 02:53:06.916631 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-26 02:53:06.916636 | orchestrator | Thursday 26 March 2026 02:52:53 +0000 (0:00:00.613) 0:03:53.450 ******** 2026-03-26 02:53:06.916642 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916648 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916653 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916659 | orchestrator | 2026-03-26 02:53:06.916664 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-26 02:53:06.916670 | orchestrator | Thursday 26 March 2026 02:52:54 +0000 (0:00:00.378) 0:03:53.828 
******** 2026-03-26 02:53:06.916676 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:53:06.916682 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:53:06.916687 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:53:06.916693 | orchestrator | 2026-03-26 02:53:06.916699 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-26 02:53:06.916704 | orchestrator | Thursday 26 March 2026 02:52:55 +0000 (0:00:01.161) 0:03:54.990 ******** 2026-03-26 02:53:06.916710 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:53:06.916715 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:53:06.916721 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:53:06.916727 | orchestrator | 2026-03-26 02:53:06.916733 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-26 02:53:06.916738 | orchestrator | Thursday 26 March 2026 02:52:56 +0000 (0:00:00.830) 0:03:55.820 ******** 2026-03-26 02:53:06.916744 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:53:06.916749 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:53:06.916755 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:53:06.916761 | orchestrator | 2026-03-26 02:53:06.916766 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-26 02:53:06.916772 | orchestrator | Thursday 26 March 2026 02:52:57 +0000 (0:00:00.950) 0:03:56.771 ******** 2026-03-26 02:53:06.916778 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916783 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916789 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916795 | orchestrator | 2026-03-26 02:53:06.916800 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-26 02:53:06.916806 | orchestrator | Thursday 26 March 2026 02:52:57 +0000 (0:00:00.730) 0:03:57.501 ******** 2026-03-26 
02:53:06.916812 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:53:06.916817 | orchestrator | 2026-03-26 02:53:06.916823 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-26 02:53:06.916829 | orchestrator | Thursday 26 March 2026 02:52:59 +0000 (0:00:01.250) 0:03:58.752 ******** 2026-03-26 02:53:06.916834 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916840 | orchestrator | 2026-03-26 02:53:06.916845 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-26 02:53:06.916851 | orchestrator | Thursday 26 March 2026 02:52:59 +0000 (0:00:00.730) 0:03:59.483 ******** 2026-03-26 02:53:06.916856 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-26 02:53:06.916862 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:53:06.916867 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:53:06.916873 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-26 02:53:06.916879 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-26 02:53:06.916885 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-26 02:53:06.916890 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-26 02:53:06.916896 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-03-26 02:53:06.916901 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-26 02:53:06.916921 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-26 02:53:06.916926 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-26 02:53:06.916931 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-26 02:53:06.916936 | orchestrator | 2026-03-26 02:53:06.916941 | orchestrator | TASK [ceph-mon : Import admin keyring 
into mon keyring] ************************ 2026-03-26 02:53:06.916945 | orchestrator | Thursday 26 March 2026 02:53:03 +0000 (0:00:03.125) 0:04:02.608 ******** 2026-03-26 02:53:06.916950 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:53:06.916955 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:53:06.916963 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:53:06.916969 | orchestrator | 2026-03-26 02:53:06.916973 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-26 02:53:06.916978 | orchestrator | Thursday 26 March 2026 02:53:04 +0000 (0:00:01.266) 0:04:03.874 ******** 2026-03-26 02:53:06.916983 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.916988 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.916993 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.916998 | orchestrator | 2026-03-26 02:53:06.917003 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-26 02:53:06.917008 | orchestrator | Thursday 26 March 2026 02:53:05 +0000 (0:00:00.657) 0:04:04.532 ******** 2026-03-26 02:53:06.917013 | orchestrator | ok: [testbed-node-0] 2026-03-26 02:53:06.917018 | orchestrator | ok: [testbed-node-1] 2026-03-26 02:53:06.917023 | orchestrator | ok: [testbed-node-2] 2026-03-26 02:53:06.917028 | orchestrator | 2026-03-26 02:53:06.917033 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-26 02:53:06.917038 | orchestrator | Thursday 26 March 2026 02:53:05 +0000 (0:00:00.355) 0:04:04.887 ******** 2026-03-26 02:53:06.917043 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:53:06.917047 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:53:06.917052 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:53:06.917057 | orchestrator | 2026-03-26 02:53:06.917066 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 
2026-03-26 02:54:09.109030 | orchestrator | Thursday 26 March 2026 02:53:06 +0000 (0:00:01.533) 0:04:06.420 ******** 2026-03-26 02:54:09.109124 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:54:09.109134 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:54:09.109141 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:54:09.109148 | orchestrator | 2026-03-26 02:54:09.109155 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-26 02:54:09.109162 | orchestrator | Thursday 26 March 2026 02:53:08 +0000 (0:00:01.632) 0:04:08.052 ******** 2026-03-26 02:54:09.109168 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:54:09.109175 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:54:09.109182 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:54:09.109188 | orchestrator | 2026-03-26 02:54:09.109194 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-26 02:54:09.109201 | orchestrator | Thursday 26 March 2026 02:53:08 +0000 (0:00:00.334) 0:04:08.387 ******** 2026-03-26 02:54:09.109208 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:54:09.109215 | orchestrator | 2026-03-26 02:54:09.109222 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-26 02:54:09.109228 | orchestrator | Thursday 26 March 2026 02:53:09 +0000 (0:00:00.567) 0:04:08.954 ******** 2026-03-26 02:54:09.109235 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:54:09.109241 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:54:09.109248 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:54:09.109254 | orchestrator | 2026-03-26 02:54:09.109261 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-26 02:54:09.109267 | orchestrator | Thursday 26 March 2026 
02:53:10 +0000 (0:00:00.631) 0:04:09.585 ******** 2026-03-26 02:54:09.109273 | orchestrator | skipping: [testbed-node-0] 2026-03-26 02:54:09.109302 | orchestrator | skipping: [testbed-node-1] 2026-03-26 02:54:09.109313 | orchestrator | skipping: [testbed-node-2] 2026-03-26 02:54:09.109328 | orchestrator | 2026-03-26 02:54:09.109338 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-26 02:54:09.109348 | orchestrator | Thursday 26 March 2026 02:53:10 +0000 (0:00:00.350) 0:04:09.936 ******** 2026-03-26 02:54:09.109358 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 02:54:09.109370 | orchestrator | 2026-03-26 02:54:09.109379 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-26 02:54:09.109388 | orchestrator | Thursday 26 March 2026 02:53:11 +0000 (0:00:00.590) 0:04:10.527 ******** 2026-03-26 02:54:09.109399 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:54:09.109408 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:54:09.109418 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:54:09.109428 | orchestrator | 2026-03-26 02:54:09.109438 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-26 02:54:09.109449 | orchestrator | Thursday 26 March 2026 02:53:13 +0000 (0:00:02.129) 0:04:12.656 ******** 2026-03-26 02:54:09.109459 | orchestrator | changed: [testbed-node-0] 2026-03-26 02:54:09.109469 | orchestrator | changed: [testbed-node-1] 2026-03-26 02:54:09.109480 | orchestrator | changed: [testbed-node-2] 2026-03-26 02:54:09.109491 | orchestrator | 2026-03-26 02:54:09.109498 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-26 02:54:09.109504 | orchestrator | Thursday 26 March 2026 02:53:14 +0000 (0:00:01.196) 0:04:13.852 ******** 2026-03-26 
02:54:09.109510 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:54:09.109517 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:54:09.109523 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:54:09.109530 | orchestrator |
2026-03-26 02:54:09.109536 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-03-26 02:54:09.109542 | orchestrator | Thursday 26 March 2026 02:53:16 +0000 (0:00:01.859) 0:04:15.712 ********
2026-03-26 02:54:09.109549 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:54:09.109555 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:54:09.109562 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:54:09.109568 | orchestrator |
2026-03-26 02:54:09.109576 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-03-26 02:54:09.109583 | orchestrator | Thursday 26 March 2026 02:53:18 +0000 (0:00:01.955) 0:04:17.667 ********
2026-03-26 02:54:09.109591 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:54:09.109598 | orchestrator |
2026-03-26 02:54:09.109606 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-03-26 02:54:09.109617 | orchestrator | Thursday 26 March 2026 02:53:19 +0000 (0:00:00.897) 0:04:18.565 ********
2026-03-26 02:54:09.109686 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-03-26 02:54:09.109698 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:09.109710 | orchestrator |
2026-03-26 02:54:09.109720 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-03-26 02:54:09.109732 | orchestrator | Thursday 26 March 2026 02:53:40 +0000 (0:00:21.851) 0:04:40.416 ********
2026-03-26 02:54:09.109743 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:09.109755 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:09.109764 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:09.109772 | orchestrator |
2026-03-26 02:54:09.109779 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-03-26 02:54:09.109786 | orchestrator | Thursday 26 March 2026 02:53:49 +0000 (0:00:08.986) 0:04:49.402 ********
2026-03-26 02:54:09.109794 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:09.109801 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:09.109808 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:09.109825 | orchestrator |
2026-03-26 02:54:09.109833 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-03-26 02:54:09.109840 | orchestrator | Thursday 26 March 2026 02:53:50 +0000 (0:00:00.363) 0:04:49.766 ********
2026-03-26 02:54:09.109865 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e255bad6a7d49c7c14086d2eafbc7336e14b386d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-03-26 02:54:09.109915 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e255bad6a7d49c7c14086d2eafbc7336e14b386d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-03-26 02:54:09.109933 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e255bad6a7d49c7c14086d2eafbc7336e14b386d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-03-26 02:54:09.109942 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e255bad6a7d49c7c14086d2eafbc7336e14b386d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-03-26 02:54:09.109949 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e255bad6a7d49c7c14086d2eafbc7336e14b386d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-03-26 02:54:09.109957 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e255bad6a7d49c7c14086d2eafbc7336e14b386d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e255bad6a7d49c7c14086d2eafbc7336e14b386d'}])
2026-03-26 02:54:09.109964 | orchestrator |
2026-03-26 02:54:09.109971 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-26 02:54:09.109977 | orchestrator | Thursday 26 March 2026 02:54:05 +0000 (0:00:14.915) 0:05:04.681 ********
2026-03-26 02:54:09.109983 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:09.109990 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:09.109996 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:09.110002 | orchestrator |
2026-03-26 02:54:09.110009 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-26 02:54:09.110059 | orchestrator | Thursday 26 March 2026 02:54:05 +0000 (0:00:00.452) 0:05:05.134 ********
2026-03-26 02:54:09.110066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:54:09.110072 | orchestrator |
2026-03-26 02:54:09.110079 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-26 02:54:09.110085 | orchestrator | Thursday 26 March 2026 02:54:06 +0000 (0:00:00.870) 0:05:06.004 ********
2026-03-26 02:54:09.110091 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:09.110098 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:09.110104 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:09.110111 | orchestrator |
2026-03-26 02:54:09.110117 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-26 02:54:09.110129 | orchestrator | Thursday 26 March 2026 02:54:06 +0000 (0:00:00.357) 0:05:06.362 ********
2026-03-26 02:54:09.110165 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:09.110172 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:09.110178 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:09.110184 | orchestrator |
2026-03-26 02:54:09.110190 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-26 02:54:09.110197 | orchestrator | Thursday 26 March 2026 02:54:07 +0000 (0:00:00.385) 0:05:06.748 ********
2026-03-26 02:54:09.110203 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 02:54:09.110210 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 02:54:09.110216 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 02:54:09.110222 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:09.110228 | orchestrator |
2026-03-26 02:54:09.110234 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-26 02:54:09.110241 | orchestrator | Thursday 26 March 2026 02:54:08 +0000 (0:00:00.959) 0:05:07.707 ********
2026-03-26 02:54:09.110247 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:09.110253 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:09.110259 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:09.110266 | orchestrator |
2026-03-26 02:54:09.110272 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-03-26 02:54:09.110278 | orchestrator |
2026-03-26 02:54:09.110290 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 02:54:36.866719 | orchestrator | Thursday 26 March 2026 02:54:09 +0000 (0:00:00.899) 0:05:08.607 ********
2026-03-26 02:54:36.866859 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:54:36.866887 | orchestrator |
2026-03-26 02:54:36.866907 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 02:54:36.866925 | orchestrator | Thursday 26 March 2026 02:54:09 +0000 (0:00:00.580) 0:05:09.188 ********
2026-03-26 02:54:36.866943 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:54:36.866961 | orchestrator |
2026-03-26 02:54:36.866980 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 02:54:36.866997 | orchestrator | Thursday 26 March 2026 02:54:10 +0000 (0:00:00.840) 0:05:10.028 ********
2026-03-26 02:54:36.867015 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.867034 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.867051 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.867068 | orchestrator |
2026-03-26 02:54:36.867086 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 02:54:36.867103 | orchestrator | Thursday 26 March 2026 02:54:11 +0000 (0:00:00.768) 0:05:10.796 ********
2026-03-26 02:54:36.867120 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.867138 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.867156 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.867173 | orchestrator |
2026-03-26 02:54:36.867191 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 02:54:36.867209 | orchestrator | Thursday 26 March 2026 02:54:11 +0000 (0:00:00.352) 0:05:11.148 ********
2026-03-26 02:54:36.867226 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.867243 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.867261 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.867279 | orchestrator |
2026-03-26 02:54:36.867298 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 02:54:36.867315 | orchestrator | Thursday 26 March 2026 02:54:12 +0000 (0:00:00.577) 0:05:11.726 ********
2026-03-26 02:54:36.867333 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.867351 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.867400 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.867419 | orchestrator |
2026-03-26 02:54:36.867438 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 02:54:36.867455 | orchestrator | Thursday 26 March 2026 02:54:12 +0000 (0:00:00.354) 0:05:12.081 ********
2026-03-26 02:54:36.867472 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.867490 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.867506 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.867524 | orchestrator |
2026-03-26 02:54:36.867542 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 02:54:36.867559 | orchestrator | Thursday 26 March 2026 02:54:13 +0000 (0:00:00.788) 0:05:12.870 ********
2026-03-26 02:54:36.867577 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.867593 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.867610 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.867627 | orchestrator |
2026-03-26 02:54:36.867643 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 02:54:36.867683 | orchestrator | Thursday 26 March 2026 02:54:13 +0000 (0:00:00.335) 0:05:13.205 ********
2026-03-26 02:54:36.867701 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.867717 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.867733 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.867749 | orchestrator |
2026-03-26 02:54:36.867765 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 02:54:36.867781 | orchestrator | Thursday 26 March 2026 02:54:14 +0000 (0:00:00.644) 0:05:13.850 ********
2026-03-26 02:54:36.867798 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.867814 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.867830 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.867847 | orchestrator |
2026-03-26 02:54:36.867864 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 02:54:36.867880 | orchestrator | Thursday 26 March 2026 02:54:15 +0000 (0:00:00.770) 0:05:14.621 ********
2026-03-26 02:54:36.867897 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.867914 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.867930 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.867946 | orchestrator |
2026-03-26 02:54:36.867963 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 02:54:36.867979 | orchestrator | Thursday 26 March 2026 02:54:15 +0000 (0:00:00.733) 0:05:15.354 ********
2026-03-26 02:54:36.867995 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.868011 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.868046 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.868062 | orchestrator |
2026-03-26 02:54:36.868080 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 02:54:36.868096 | orchestrator | Thursday 26 March 2026 02:54:16 +0000 (0:00:00.328) 0:05:15.683 ********
2026-03-26 02:54:36.868113 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.868125 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.868135 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.868144 | orchestrator |
2026-03-26 02:54:36.868154 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 02:54:36.868163 | orchestrator | Thursday 26 March 2026 02:54:16 +0000 (0:00:00.665) 0:05:16.348 ********
2026-03-26 02:54:36.868173 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.868182 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.868192 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.868201 | orchestrator |
2026-03-26 02:54:36.868211 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 02:54:36.868220 | orchestrator | Thursday 26 March 2026 02:54:17 +0000 (0:00:00.390) 0:05:16.739 ********
2026-03-26 02:54:36.868230 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.868239 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.868249 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.868259 | orchestrator |
2026-03-26 02:54:36.868299 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 02:54:36.868310 | orchestrator | Thursday 26 March 2026 02:54:17 +0000 (0:00:00.336) 0:05:17.075 ********
2026-03-26 02:54:36.868319 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.868329 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.868338 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.868348 | orchestrator |
2026-03-26 02:54:36.868358 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 02:54:36.868367 | orchestrator | Thursday 26 March 2026 02:54:18 +0000 (0:00:00.613) 0:05:17.689 ********
2026-03-26 02:54:36.868377 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.868386 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.868396 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.868405 | orchestrator |
2026-03-26 02:54:36.868415 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 02:54:36.868425 | orchestrator | Thursday 26 March 2026 02:54:18 +0000 (0:00:00.324) 0:05:18.014 ********
2026-03-26 02:54:36.868434 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.868444 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.868453 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.868463 | orchestrator |
2026-03-26 02:54:36.868472 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 02:54:36.868482 | orchestrator | Thursday 26 March 2026 02:54:18 +0000 (0:00:00.341) 0:05:18.355 ********
2026-03-26 02:54:36.868492 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.868501 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.868511 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.868520 | orchestrator |
2026-03-26 02:54:36.868530 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 02:54:36.868540 | orchestrator | Thursday 26 March 2026 02:54:19 +0000 (0:00:00.379) 0:05:18.735 ********
2026-03-26 02:54:36.868549 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.868559 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.868568 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.868578 | orchestrator |
2026-03-26 02:54:36.868587 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 02:54:36.868597 | orchestrator | Thursday 26 March 2026 02:54:19 +0000 (0:00:00.632) 0:05:19.368 ********
2026-03-26 02:54:36.868607 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.868616 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.868625 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.868635 | orchestrator |
2026-03-26 02:54:36.868644 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-26 02:54:36.868695 | orchestrator | Thursday 26 March 2026 02:54:20 +0000 (0:00:00.601) 0:05:19.969 ********
2026-03-26 02:54:36.868712 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 02:54:36.868728 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 02:54:36.868743 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 02:54:36.868759 | orchestrator |
2026-03-26 02:54:36.868775 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-26 02:54:36.868792 | orchestrator | Thursday 26 March 2026 02:54:21 +0000 (0:00:00.911) 0:05:20.881 ********
2026-03-26 02:54:36.868808 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:54:36.868824 | orchestrator |
2026-03-26 02:54:36.868839 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-26 02:54:36.868849 | orchestrator | Thursday 26 March 2026 02:54:22 +0000 (0:00:00.805) 0:05:21.686 ********
2026-03-26 02:54:36.868859 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:54:36.868869 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:54:36.868878 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:54:36.868888 | orchestrator |
2026-03-26 02:54:36.868898 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-26 02:54:36.868917 | orchestrator | Thursday 26 March 2026 02:54:22 +0000 (0:00:00.742) 0:05:22.429 ********
2026-03-26 02:54:36.868926 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:54:36.868936 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:54:36.868946 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:54:36.868956 | orchestrator |
2026-03-26 02:54:36.868965 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-26 02:54:36.868975 | orchestrator | Thursday 26 March 2026 02:54:23 +0000 (0:00:00.355) 0:05:22.784 ********
2026-03-26 02:54:36.868985 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-26 02:54:36.868995 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-26 02:54:36.869005 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-26 02:54:36.869015 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-03-26 02:54:36.869024 | orchestrator |
2026-03-26 02:54:36.869040 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-26 02:54:36.869050 | orchestrator | Thursday 26 March 2026 02:54:33 +0000 (0:00:10.631) 0:05:33.415 ********
2026-03-26 02:54:36.869060 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:54:36.869070 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:54:36.869080 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:54:36.869089 | orchestrator |
2026-03-26 02:54:36.869099 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-26 02:54:36.869109 | orchestrator | Thursday 26 March 2026 02:54:34 +0000 (0:00:00.641) 0:05:34.056 ********
2026-03-26 02:54:36.869119 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-26 02:54:36.869128 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-26 02:54:36.869138 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-26 02:54:36.869148 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-26 02:54:36.869158 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 02:54:36.869168 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 02:54:36.869177 | orchestrator |
2026-03-26 02:54:36.869187 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-26 02:54:36.869205 | orchestrator | Thursday 26 March 2026 02:54:36 +0000 (0:00:02.303) 0:05:36.360 ********
2026-03-26 02:55:34.784783 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-26 02:55:34.784913 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-26 02:55:34.784934 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-26 02:55:34.784948 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-26 02:55:34.784962 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-26 02:55:34.784975 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-26 02:55:34.784990 | orchestrator |
2026-03-26 02:55:34.785005 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-26 02:55:34.785020 | orchestrator | Thursday 26 March 2026 02:54:38 +0000 (0:00:01.397) 0:05:37.758 ********
2026-03-26 02:55:34.785033 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:55:34.785047 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:55:34.785060 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:55:34.785072 | orchestrator |
2026-03-26 02:55:34.785084 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-26 02:55:34.785097 | orchestrator | Thursday 26 March 2026 02:54:39 +0000 (0:00:00.819) 0:05:38.577 ********
2026-03-26 02:55:34.785110 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:55:34.785124 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:55:34.785137 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:55:34.785150 | orchestrator |
2026-03-26 02:55:34.785157 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-26 02:55:34.785165 | orchestrator | Thursday 26 March 2026 02:54:39 +0000 (0:00:00.606) 0:05:39.184 ********
2026-03-26 02:55:34.785195 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:55:34.785203 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:55:34.785216 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:55:34.785234 | orchestrator |
2026-03-26 02:55:34.785248 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-26 02:55:34.785259 | orchestrator | Thursday 26 March 2026 02:54:40 +0000 (0:00:00.359) 0:05:39.544 ********
2026-03-26 02:55:34.785273 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:55:34.785287 | orchestrator |
2026-03-26 02:55:34.785301 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-26 02:55:34.785313 | orchestrator | Thursday 26 March 2026 02:54:40 +0000 (0:00:00.604) 0:05:40.149 ********
2026-03-26 02:55:34.785325 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:55:34.785334 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:55:34.785343 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:55:34.785356 | orchestrator |
2026-03-26 02:55:34.785368 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-26 02:55:34.785381 | orchestrator | Thursday 26 March 2026 02:54:41 +0000 (0:00:00.608) 0:05:40.757 ********
2026-03-26 02:55:34.785394 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:55:34.785408 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:55:34.785420 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:55:34.785434 | orchestrator |
2026-03-26 02:55:34.785444 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-26 02:55:34.785462 | orchestrator | Thursday 26 March 2026 02:54:41 +0000 (0:00:00.367) 0:05:41.125 ********
2026-03-26 02:55:34.785476 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:55:34.785490 | orchestrator |
2026-03-26 02:55:34.785505 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-26 02:55:34.785517 | orchestrator | Thursday 26 March 2026 02:54:42 +0000 (0:00:00.548) 0:05:41.674 ********
2026-03-26 02:55:34.785531 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:55:34.785542 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:55:34.785554 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:55:34.785566 | orchestrator |
2026-03-26 02:55:34.785578 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-26 02:55:34.785590 | orchestrator | Thursday 26 March 2026 02:54:44 +0000 (0:00:01.999) 0:05:43.673 ********
2026-03-26 02:55:34.785602 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:55:34.785613 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:55:34.785624 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:55:34.785636 | orchestrator |
2026-03-26 02:55:34.785648 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-26 02:55:34.785660 | orchestrator | Thursday 26 March 2026 02:54:45 +0000 (0:00:01.225) 0:05:44.898 ********
2026-03-26 02:55:34.785671 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:55:34.785684 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:55:34.785698 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:55:34.785776 | orchestrator |
2026-03-26 02:55:34.785793 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-26 02:55:34.785815 | orchestrator | Thursday 26 March 2026 02:54:47 +0000 (0:00:01.786) 0:05:46.684 ********
2026-03-26 02:55:34.785823 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:55:34.785830 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:55:34.785838 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:55:34.785845 | orchestrator |
2026-03-26 02:55:34.785852 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-26 02:55:34.785860 | orchestrator | Thursday 26 March 2026 02:54:49 +0000 (0:00:02.736) 0:05:49.420 ********
2026-03-26 02:55:34.785867 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:55:34.785874 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:55:34.785882 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-26 02:55:34.785899 | orchestrator |
2026-03-26 02:55:34.785906 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-26 02:55:34.785913 | orchestrator | Thursday 26 March 2026 02:54:50 +0000 (0:00:00.728) 0:05:50.149 ********
2026-03-26 02:55:34.785921 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-03-26 02:55:34.785928 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-03-26 02:55:34.785955 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-03-26 02:55:34.785963 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-03-26 02:55:34.785971 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-26 02:55:34.785978 | orchestrator |
2026-03-26 02:55:34.785986 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-26 02:55:34.785993 | orchestrator | Thursday 26 March 2026 02:55:14 +0000 (0:00:24.280) 0:06:14.429 ********
2026-03-26 02:55:34.786000 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-26 02:55:34.786008 | orchestrator |
2026-03-26 02:55:34.786059 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-26 02:55:34.786067 | orchestrator | Thursday 26 March 2026 02:55:16 +0000 (0:00:01.301) 0:06:15.731 ********
2026-03-26 02:55:34.786074 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:55:34.786081 | orchestrator |
2026-03-26 02:55:34.786089 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-26 02:55:34.786096 | orchestrator | Thursday 26 March 2026 02:55:16 +0000 (0:00:00.330) 0:06:16.061 ********
2026-03-26 02:55:34.786109 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:55:34.786121 | orchestrator |
2026-03-26 02:55:34.786133 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-26 02:55:34.786145 | orchestrator | Thursday 26 March 2026 02:55:16 +0000 (0:00:00.174) 0:06:16.236 ********
2026-03-26 02:55:34.786157 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-26 02:55:34.786168 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-26 02:55:34.786180 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-26 02:55:34.786192 | orchestrator |
2026-03-26 02:55:34.786204 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-26 02:55:34.786216 | orchestrator | Thursday 26 March 2026 02:55:23 +0000 (0:00:06.365) 0:06:22.602 ********
2026-03-26 02:55:34.786228 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-26 02:55:34.786240 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-26 02:55:34.786253 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-26 02:55:34.786265 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-26 02:55:34.786279 | orchestrator |
2026-03-26 02:55:34.786291 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-26 02:55:34.786303 | orchestrator | Thursday 26 March 2026 02:55:28 +0000 (0:00:05.322) 0:06:27.925 ********
2026-03-26 02:55:34.786315 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:55:34.786328 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:55:34.786340 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:55:34.786352 | orchestrator |
2026-03-26 02:55:34.786365 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-26 02:55:34.786377 | orchestrator | Thursday 26 March 2026 02:55:29 +0000 (0:00:00.706) 0:06:28.631 ********
2026-03-26 02:55:34.786389 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:55:34.786401 | orchestrator |
2026-03-26 02:55:34.786424 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-26 02:55:34.786437 | orchestrator | Thursday 26 March 2026 02:55:30 +0000 (0:00:00.903) 0:06:29.534 ********
2026-03-26 02:55:34.786449 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:55:34.786461 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:55:34.786472 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:55:34.786484 | orchestrator |
2026-03-26 02:55:34.786494 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-26 02:55:34.786506 | orchestrator | Thursday 26 March 2026 02:55:30 +0000 (0:00:00.370) 0:06:29.905 ********
2026-03-26 02:55:34.786518 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:55:34.786530 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:55:34.786540 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:55:34.786547 | orchestrator |
2026-03-26 02:55:34.786554 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-26 02:55:34.786561 | orchestrator | Thursday 26 March 2026 02:55:31 +0000 (0:00:01.162) 0:06:31.068 ********
2026-03-26 02:55:34.786568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 02:55:34.786574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 02:55:34.786587 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 02:55:34.786594 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:55:34.786601 | orchestrator |
2026-03-26 02:55:34.786608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-26 02:55:34.786614 | orchestrator | Thursday 26 March 2026 02:55:32 +0000 (0:00:00.920) 0:06:31.989 ********
2026-03-26 02:55:34.786621 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:55:34.786628 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:55:34.786635 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:55:34.786642 | orchestrator |
2026-03-26 02:55:34.786648 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-26 02:55:34.786655 | orchestrator |
2026-03-26 02:55:34.786662 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 02:55:34.786669 | orchestrator | Thursday 26 March 2026 02:55:33 +0000 (0:00:00.895) 0:06:32.884 ********
2026-03-26 02:55:34.786676 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:55:34.786684 | orchestrator |
2026-03-26 02:55:34.786691 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 02:55:34.786698 | orchestrator | Thursday 26 March 2026 02:55:33 +0000 (0:00:00.594) 0:06:33.479 ********
2026-03-26 02:55:34.786733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:55:51.336048 | orchestrator |
2026-03-26 02:55:51.336147 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 02:55:51.336160 | orchestrator | Thursday 26 March 2026 02:55:34 +0000 (0:00:00.808) 0:06:34.288 ********
2026-03-26 02:55:51.336167 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:55:51.336174 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:55:51.336182 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:55:51.336188 | orchestrator |
2026-03-26 02:55:51.336195 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 02:55:51.336202 | orchestrator | Thursday 26 March 2026 02:55:35 +0000 (0:00:00.389) 0:06:34.678 ********
2026-03-26 02:55:51.336209 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:55:51.336217 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:55:51.336224 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:55:51.336230 | orchestrator |
2026-03-26 02:55:51.336236 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 02:55:51.336242 | orchestrator | Thursday 26 March 2026 02:55:35 +0000 (0:00:00.778) 0:06:35.457 ********
2026-03-26 02:55:51.336249 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:55:51.336255 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:55:51.336279 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:55:51.336284 | orchestrator |
2026-03-26 02:55:51.336288 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 02:55:51.336292 | orchestrator | Thursday 26 March 2026 02:55:36 +0000 (0:00:00.694) 0:06:36.152 ********
2026-03-26 02:55:51.336296 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:55:51.336299 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:55:51.336303 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:55:51.336307 | orchestrator |
2026-03-26 02:55:51.336311 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 02:55:51.336315 | orchestrator | Thursday 26 March 2026 02:55:37 +0000 (0:00:01.077) 0:06:37.229 ********
2026-03-26 02:55:51.336319 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:55:51.336323 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:55:51.336327 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:55:51.336331 | orchestrator |
2026-03-26 02:55:51.336335 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 02:55:51.336339 | orchestrator | Thursday 26 March 2026 02:55:38 +0000 (0:00:00.346) 0:06:37.575 ********
2026-03-26 02:55:51.336342 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:55:51.336346 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:55:51.336350 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:55:51.336354 | orchestrator |
2026-03-26 02:55:51.336358 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 02:55:51.336361 | orchestrator | Thursday 26 March 2026 02:55:38 +0000 (0:00:00.395) 0:06:37.971 ********
2026-03-26 02:55:51.336365 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:55:51.336369 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:55:51.336373 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:55:51.336377 | orchestrator |
2026-03-26 02:55:51.336380 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 02:55:51.336384 | orchestrator | Thursday 26 March 2026 02:55:38 +0000 (0:00:00.328) 0:06:38.300 ********
2026-03-26 02:55:51.336388 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:55:51.336392 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:55:51.336396 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:55:51.336400 | orchestrator |
2026-03-26 02:55:51.336404 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 02:55:51.336408 | orchestrator | Thursday 26 March 2026 02:55:39 +0000 (0:00:01.064) 0:06:39.364 ********
2026-03-26 02:55:51.336411 | orchestrator | ok: [testbed-node-3]
2026-03-26
02:55:51.336415 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336419 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336423 | orchestrator | 2026-03-26 02:55:51.336427 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 02:55:51.336430 | orchestrator | Thursday 26 March 2026 02:55:40 +0000 (0:00:00.726) 0:06:40.091 ******** 2026-03-26 02:55:51.336434 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:55:51.336438 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:55:51.336442 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:55:51.336446 | orchestrator | 2026-03-26 02:55:51.336450 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 02:55:51.336454 | orchestrator | Thursday 26 March 2026 02:55:40 +0000 (0:00:00.328) 0:06:40.419 ******** 2026-03-26 02:55:51.336457 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:55:51.336461 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:55:51.336465 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:55:51.336469 | orchestrator | 2026-03-26 02:55:51.336472 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 02:55:51.336486 | orchestrator | Thursday 26 March 2026 02:55:41 +0000 (0:00:00.360) 0:06:40.780 ******** 2026-03-26 02:55:51.336490 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336494 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336498 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336502 | orchestrator | 2026-03-26 02:55:51.336509 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 02:55:51.336513 | orchestrator | Thursday 26 March 2026 02:55:41 +0000 (0:00:00.663) 0:06:41.444 ******** 2026-03-26 02:55:51.336517 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336521 | orchestrator | ok: 
[testbed-node-4] 2026-03-26 02:55:51.336525 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336529 | orchestrator | 2026-03-26 02:55:51.336533 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 02:55:51.336537 | orchestrator | Thursday 26 March 2026 02:55:42 +0000 (0:00:00.355) 0:06:41.800 ******** 2026-03-26 02:55:51.336540 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336544 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336548 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336552 | orchestrator | 2026-03-26 02:55:51.336556 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 02:55:51.336559 | orchestrator | Thursday 26 March 2026 02:55:42 +0000 (0:00:00.384) 0:06:42.184 ******** 2026-03-26 02:55:51.336563 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:55:51.336567 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:55:51.336571 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:55:51.336575 | orchestrator | 2026-03-26 02:55:51.336579 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 02:55:51.336592 | orchestrator | Thursday 26 March 2026 02:55:43 +0000 (0:00:00.333) 0:06:42.518 ******** 2026-03-26 02:55:51.336596 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:55:51.336600 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:55:51.336604 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:55:51.336607 | orchestrator | 2026-03-26 02:55:51.336611 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 02:55:51.336615 | orchestrator | Thursday 26 March 2026 02:55:43 +0000 (0:00:00.651) 0:06:43.169 ******** 2026-03-26 02:55:51.336620 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:55:51.336624 | orchestrator | skipping: [testbed-node-4] 2026-03-26 
02:55:51.336629 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:55:51.336633 | orchestrator | 2026-03-26 02:55:51.336637 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 02:55:51.336642 | orchestrator | Thursday 26 March 2026 02:55:43 +0000 (0:00:00.343) 0:06:43.513 ******** 2026-03-26 02:55:51.336646 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336651 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336655 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336659 | orchestrator | 2026-03-26 02:55:51.336664 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 02:55:51.336668 | orchestrator | Thursday 26 March 2026 02:55:44 +0000 (0:00:00.361) 0:06:43.874 ******** 2026-03-26 02:55:51.336672 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336676 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336681 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336685 | orchestrator | 2026-03-26 02:55:51.336690 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-26 02:55:51.336694 | orchestrator | Thursday 26 March 2026 02:55:45 +0000 (0:00:00.879) 0:06:44.754 ******** 2026-03-26 02:55:51.336699 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336703 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336707 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336712 | orchestrator | 2026-03-26 02:55:51.336718 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-26 02:55:51.336724 | orchestrator | Thursday 26 March 2026 02:55:45 +0000 (0:00:00.347) 0:06:45.101 ******** 2026-03-26 02:55:51.336759 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 02:55:51.336766 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 02:55:51.336770 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 02:55:51.336779 | orchestrator | 2026-03-26 02:55:51.336783 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-26 02:55:51.336787 | orchestrator | Thursday 26 March 2026 02:55:46 +0000 (0:00:00.914) 0:06:46.015 ******** 2026-03-26 02:55:51.336792 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:55:51.336797 | orchestrator | 2026-03-26 02:55:51.336801 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-26 02:55:51.336805 | orchestrator | Thursday 26 March 2026 02:55:47 +0000 (0:00:00.837) 0:06:46.853 ******** 2026-03-26 02:55:51.336810 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:55:51.336814 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:55:51.336818 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:55:51.336822 | orchestrator | 2026-03-26 02:55:51.336827 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-26 02:55:51.336831 | orchestrator | Thursday 26 March 2026 02:55:47 +0000 (0:00:00.338) 0:06:47.191 ******** 2026-03-26 02:55:51.336836 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:55:51.336840 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:55:51.336844 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:55:51.336848 | orchestrator | 2026-03-26 02:55:51.336853 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-26 02:55:51.336857 | orchestrator | Thursday 26 March 2026 02:55:48 +0000 (0:00:00.355) 0:06:47.547 ******** 2026-03-26 02:55:51.336861 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336866 | 
orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336870 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336874 | orchestrator | 2026-03-26 02:55:51.336879 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-26 02:55:51.336883 | orchestrator | Thursday 26 March 2026 02:55:48 +0000 (0:00:00.654) 0:06:48.202 ******** 2026-03-26 02:55:51.336887 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:55:51.336891 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:55:51.336894 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:55:51.336898 | orchestrator | 2026-03-26 02:55:51.336905 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-26 02:55:51.336909 | orchestrator | Thursday 26 March 2026 02:55:49 +0000 (0:00:00.653) 0:06:48.855 ******** 2026-03-26 02:55:51.336913 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-26 02:55:51.336918 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-26 02:55:51.336922 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-26 02:55:51.336925 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-26 02:55:51.336929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-26 02:55:51.336933 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-26 02:55:51.336937 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-26 02:55:51.336941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-26 02:55:51.336944 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-26 02:55:51.336951 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-26 02:56:58.420211 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-26 02:56:58.420305 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-26 02:56:58.420316 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-26 02:56:58.420326 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-26 02:56:58.420350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-26 02:56:58.420358 | orchestrator | 2026-03-26 02:56:58.420366 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-26 02:56:58.420373 | orchestrator | Thursday 26 March 2026 02:55:51 +0000 (0:00:01.974) 0:06:50.830 ******** 2026-03-26 02:56:58.420381 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:56:58.420389 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:56:58.420397 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:56:58.420404 | orchestrator | 2026-03-26 02:56:58.420411 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-26 02:56:58.420418 | orchestrator | Thursday 26 March 2026 02:55:51 +0000 (0:00:00.334) 0:06:51.165 ******** 2026-03-26 02:56:58.420426 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:56:58.420434 | orchestrator | 2026-03-26 02:56:58.420441 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-26 02:56:58.420449 | orchestrator | Thursday 26 March 2026 02:55:52 +0000 (0:00:00.852) 
0:06:52.017 ******** 2026-03-26 02:56:58.420456 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-26 02:56:58.420463 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-26 02:56:58.420470 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-26 02:56:58.420478 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-03-26 02:56:58.420486 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-26 02:56:58.420494 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-26 02:56:58.420501 | orchestrator | 2026-03-26 02:56:58.420508 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-26 02:56:58.420515 | orchestrator | Thursday 26 March 2026 02:55:53 +0000 (0:00:01.019) 0:06:53.037 ******** 2026-03-26 02:56:58.420522 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:56:58.420529 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 02:56:58.420536 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 02:56:58.420544 | orchestrator | 2026-03-26 02:56:58.420551 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-26 02:56:58.420558 | orchestrator | Thursday 26 March 2026 02:55:55 +0000 (0:00:02.033) 0:06:55.070 ******** 2026-03-26 02:56:58.420565 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-26 02:56:58.420573 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 02:56:58.420580 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:56:58.420587 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-26 02:56:58.420594 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 02:56:58.420601 | orchestrator | changed: [testbed-node-4] 2026-03-26 
02:56:58.420608 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-26 02:56:58.420615 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-26 02:56:58.420623 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:56:58.420630 | orchestrator | 2026-03-26 02:56:58.420637 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-26 02:56:58.420644 | orchestrator | Thursday 26 March 2026 02:55:56 +0000 (0:00:01.153) 0:06:56.224 ******** 2026-03-26 02:56:58.420651 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-26 02:56:58.420659 | orchestrator | 2026-03-26 02:56:58.420666 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-26 02:56:58.420673 | orchestrator | Thursday 26 March 2026 02:55:58 +0000 (0:00:02.086) 0:06:58.311 ******** 2026-03-26 02:56:58.420686 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:56:58.420694 | orchestrator | 2026-03-26 02:56:58.420706 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-03-26 02:56:58.420713 | orchestrator | Thursday 26 March 2026 02:55:59 +0000 (0:00:00.966) 0:06:59.277 ******** 2026-03-26 02:56:58.420721 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}) 2026-03-26 02:56:58.420730 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'}) 2026-03-26 02:56:58.420740 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}) 2026-03-26 02:56:58.420752 | orchestrator | changed: [testbed-node-3] 
=> (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'}) 2026-03-26 02:56:58.420763 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}) 2026-03-26 02:56:58.420791 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}) 2026-03-26 02:56:58.420819 | orchestrator | 2026-03-26 02:56:58.420826 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-26 02:56:58.420834 | orchestrator | Thursday 26 March 2026 02:56:40 +0000 (0:00:41.005) 0:07:40.283 ******** 2026-03-26 02:56:58.420841 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:56:58.420852 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:56:58.420864 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:56:58.420876 | orchestrator | 2026-03-26 02:56:58.420888 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-26 02:56:58.420899 | orchestrator | Thursday 26 March 2026 02:56:41 +0000 (0:00:00.358) 0:07:40.642 ******** 2026-03-26 02:56:58.420909 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:56:58.420921 | orchestrator | 2026-03-26 02:56:58.420932 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-26 02:56:58.420943 | orchestrator | Thursday 26 March 2026 02:56:41 +0000 (0:00:00.842) 0:07:41.484 ******** 2026-03-26 02:56:58.420954 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:56:58.420965 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:56:58.420975 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:56:58.420986 | orchestrator | 2026-03-26 
02:56:58.420998 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-03-26 02:56:58.421010 | orchestrator | Thursday 26 March 2026 02:56:42 +0000 (0:00:00.722) 0:07:42.206 ******** 2026-03-26 02:56:58.421023 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:56:58.421035 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:56:58.421046 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:56:58.421058 | orchestrator | 2026-03-26 02:56:58.421064 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-26 02:56:58.421071 | orchestrator | Thursday 26 March 2026 02:56:45 +0000 (0:00:02.698) 0:07:44.905 ******** 2026-03-26 02:56:58.421077 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:56:58.421085 | orchestrator | 2026-03-26 02:56:58.421098 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-26 02:56:58.421110 | orchestrator | Thursday 26 March 2026 02:56:46 +0000 (0:00:00.912) 0:07:45.817 ******** 2026-03-26 02:56:58.421120 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:56:58.421128 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:56:58.421135 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:56:58.421141 | orchestrator | 2026-03-26 02:56:58.421148 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-26 02:56:58.421154 | orchestrator | Thursday 26 March 2026 02:56:47 +0000 (0:00:01.197) 0:07:47.014 ******** 2026-03-26 02:56:58.421165 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:56:58.421172 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:56:58.421178 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:56:58.421184 | orchestrator | 2026-03-26 02:56:58.421191 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2026-03-26 02:56:58.421197 | orchestrator | Thursday 26 March 2026 02:56:48 +0000 (0:00:01.249) 0:07:48.264 ******** 2026-03-26 02:56:58.421204 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:56:58.421210 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:56:58.421217 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:56:58.421223 | orchestrator | 2026-03-26 02:56:58.421230 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-26 02:56:58.421236 | orchestrator | Thursday 26 March 2026 02:56:50 +0000 (0:00:02.004) 0:07:50.269 ******** 2026-03-26 02:56:58.421243 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:56:58.421249 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:56:58.421256 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:56:58.421262 | orchestrator | 2026-03-26 02:56:58.421269 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-26 02:56:58.421275 | orchestrator | Thursday 26 March 2026 02:56:51 +0000 (0:00:00.340) 0:07:50.609 ******** 2026-03-26 02:56:58.421282 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:56:58.421288 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:56:58.421295 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:56:58.421301 | orchestrator | 2026-03-26 02:56:58.421307 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-26 02:56:58.421314 | orchestrator | Thursday 26 March 2026 02:56:51 +0000 (0:00:00.352) 0:07:50.962 ******** 2026-03-26 02:56:58.421320 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-03-26 02:56:58.421330 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-26 02:56:58.421336 | orchestrator | ok: [testbed-node-5] => (item=1) 2026-03-26 02:56:58.421343 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-26 02:56:58.421350 | orchestrator | ok: 
[testbed-node-4] => (item=2) 2026-03-26 02:56:58.421356 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-03-26 02:56:58.421362 | orchestrator | 2026-03-26 02:56:58.421369 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-26 02:56:58.421375 | orchestrator | Thursday 26 March 2026 02:56:52 +0000 (0:00:01.029) 0:07:51.991 ******** 2026-03-26 02:56:58.421382 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-26 02:56:58.421388 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-26 02:56:58.421395 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-26 02:56:58.421401 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-26 02:56:58.421408 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-26 02:56:58.421414 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-26 02:56:58.421421 | orchestrator | 2026-03-26 02:56:58.421427 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-26 02:56:58.421434 | orchestrator | Thursday 26 March 2026 02:56:54 +0000 (0:00:02.493) 0:07:54.484 ******** 2026-03-26 02:56:58.421440 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-26 02:56:58.421447 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-26 02:56:58.421453 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-26 02:56:58.421460 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-26 02:56:58.421470 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-26 02:57:31.576105 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-26 02:57:31.576196 | orchestrator | 2026-03-26 02:57:31.576207 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-26 02:57:31.576218 | orchestrator | Thursday 26 March 2026 02:56:58 +0000 (0:00:03.433) 0:07:57.918 ******** 2026-03-26 02:57:31.576225 | orchestrator | 
skipping: [testbed-node-3] 2026-03-26 02:57:31.576232 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:57:31.576262 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-26 02:57:31.576273 | orchestrator | 2026-03-26 02:57:31.576280 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-26 02:57:31.576288 | orchestrator | Thursday 26 March 2026 02:57:01 +0000 (0:00:03.095) 0:08:01.014 ******** 2026-03-26 02:57:31.576294 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:57:31.576301 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:57:31.576308 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-03-26 02:57:31.576317 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-26 02:57:31.576324 | orchestrator | 2026-03-26 02:57:31.576330 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-26 02:57:31.576338 | orchestrator | Thursday 26 March 2026 02:57:14 +0000 (0:00:12.703) 0:08:13.717 ******** 2026-03-26 02:57:31.576345 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:57:31.576352 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:57:31.576359 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:57:31.576366 | orchestrator | 2026-03-26 02:57:31.576373 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-26 02:57:31.576381 | orchestrator | Thursday 26 March 2026 02:57:15 +0000 (0:00:01.343) 0:08:15.061 ******** 2026-03-26 02:57:31.576388 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:57:31.576395 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:57:31.576402 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:57:31.576408 | orchestrator | 2026-03-26 02:57:31.576416 | orchestrator | RUNNING HANDLER [ceph-handler : Osds 
handler] **********************************
2026-03-26 02:57:31.576423 | orchestrator | Thursday 26 March 2026 02:57:16 +0000 (0:00:00.684) 0:08:15.745 ********
2026-03-26 02:57:31.576431 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:57:31.576439 | orchestrator |
2026-03-26 02:57:31.576447 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-26 02:57:31.576454 | orchestrator | Thursday 26 March 2026 02:57:16 +0000 (0:00:00.569) 0:08:16.315 ********
2026-03-26 02:57:31.576462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:57:31.576469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:57:31.576477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:57:31.576485 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576493 | orchestrator |
2026-03-26 02:57:31.576500 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-26 02:57:31.576509 | orchestrator | Thursday 26 March 2026 02:57:17 +0000 (0:00:00.415) 0:08:16.731 ********
2026-03-26 02:57:31.576515 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576519 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:57:31.576524 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:57:31.576529 | orchestrator |
2026-03-26 02:57:31.576533 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-26 02:57:31.576538 | orchestrator | Thursday 26 March 2026 02:57:17 +0000 (0:00:00.374) 0:08:17.105 ********
2026-03-26 02:57:31.576543 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576547 | orchestrator |
2026-03-26 02:57:31.576552 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-26 02:57:31.576557 | orchestrator | Thursday 26 March 2026 02:57:17 +0000 (0:00:00.252) 0:08:17.358 ********
2026-03-26 02:57:31.576562 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576566 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:57:31.576571 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:57:31.576575 | orchestrator |
2026-03-26 02:57:31.576580 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-26 02:57:31.576584 | orchestrator | Thursday 26 March 2026 02:57:18 +0000 (0:00:00.733) 0:08:18.091 ********
2026-03-26 02:57:31.576598 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576602 | orchestrator |
2026-03-26 02:57:31.576618 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-26 02:57:31.576623 | orchestrator | Thursday 26 March 2026 02:57:18 +0000 (0:00:00.263) 0:08:18.355 ********
2026-03-26 02:57:31.576628 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576632 | orchestrator |
2026-03-26 02:57:31.576639 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-26 02:57:31.576646 | orchestrator | Thursday 26 March 2026 02:57:19 +0000 (0:00:00.258) 0:08:18.614 ********
2026-03-26 02:57:31.576653 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576661 | orchestrator |
2026-03-26 02:57:31.576668 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-26 02:57:31.576675 | orchestrator | Thursday 26 March 2026 02:57:19 +0000 (0:00:00.163) 0:08:18.777 ********
2026-03-26 02:57:31.576682 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576689 | orchestrator |
2026-03-26 02:57:31.576696 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-26 02:57:31.576703 | orchestrator | Thursday 26 March 2026 02:57:19 +0000 (0:00:00.246) 0:08:19.024 ********
2026-03-26 02:57:31.576710 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576717 | orchestrator |
2026-03-26 02:57:31.576724 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-26 02:57:31.576731 | orchestrator | Thursday 26 March 2026 02:57:19 +0000 (0:00:00.263) 0:08:19.288 ********
2026-03-26 02:57:31.576738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 02:57:31.576746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 02:57:31.576772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 02:57:31.576780 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576787 | orchestrator |
2026-03-26 02:57:31.576795 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-26 02:57:31.576802 | orchestrator | Thursday 26 March 2026 02:57:20 +0000 (0:00:00.432) 0:08:19.720 ********
2026-03-26 02:57:31.576809 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576816 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:57:31.576823 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:57:31.576856 | orchestrator |
2026-03-26 02:57:31.576864 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-26 02:57:31.576872 | orchestrator | Thursday 26 March 2026 02:57:20 +0000 (0:00:00.640) 0:08:20.361 ********
2026-03-26 02:57:31.576880 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576887 | orchestrator |
2026-03-26 02:57:31.576894 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-26 02:57:31.576901 | orchestrator | Thursday 26 March 2026 02:57:21 +0000 (0:00:00.259) 0:08:20.621 ********
2026-03-26 02:57:31.576909 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.576916 | orchestrator |
2026-03-26 02:57:31.576924 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-26 02:57:31.576931 | orchestrator |
2026-03-26 02:57:31.576938 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 02:57:31.576946 | orchestrator | Thursday 26 March 2026 02:57:21 +0000 (0:00:00.758) 0:08:21.380 ********
2026-03-26 02:57:31.576954 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:57:31.576960 | orchestrator |
2026-03-26 02:57:31.576965 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 02:57:31.576970 | orchestrator | Thursday 26 March 2026 02:57:23 +0000 (0:00:01.348) 0:08:22.728 ********
2026-03-26 02:57:31.576974 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:57:31.576986 | orchestrator |
2026-03-26 02:57:31.576991 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 02:57:31.576995 | orchestrator | Thursday 26 March 2026 02:57:24 +0000 (0:00:01.484) 0:08:24.213 ********
2026-03-26 02:57:31.577000 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.577005 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:57:31.577009 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:57:31.577014 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:57:31.577019 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:57:31.577023 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:57:31.577028 | orchestrator |
2026-03-26 02:57:31.577032 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 02:57:31.577037 | orchestrator | Thursday 26 March 2026 02:57:26 +0000 (0:00:01.323) 0:08:25.537 ********
2026-03-26 02:57:31.577042 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:57:31.577046 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:57:31.577051 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:57:31.577055 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:57:31.577060 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:57:31.577064 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:57:31.577069 | orchestrator |
2026-03-26 02:57:31.577074 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 02:57:31.577078 | orchestrator | Thursday 26 March 2026 02:57:26 +0000 (0:00:00.783) 0:08:26.320 ********
2026-03-26 02:57:31.577083 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:57:31.577088 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:57:31.577092 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:57:31.577097 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:57:31.577101 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:57:31.577106 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:57:31.577110 | orchestrator |
2026-03-26 02:57:31.577115 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 02:57:31.577120 | orchestrator | Thursday 26 March 2026 02:57:27 +0000 (0:00:00.990) 0:08:27.311 ********
2026-03-26 02:57:31.577124 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:57:31.577129 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:57:31.577133 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:57:31.577138 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:57:31.577143 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:57:31.577147 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:57:31.577152 | orchestrator |
2026-03-26 02:57:31.577162 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 02:57:31.577167 | orchestrator | Thursday 26 March 2026 02:57:28 +0000 (0:00:00.724) 0:08:28.035 ********
2026-03-26 02:57:31.577171 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.577176 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:57:31.577180 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:57:31.577185 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:57:31.577190 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:57:31.577194 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:57:31.577199 | orchestrator |
2026-03-26 02:57:31.577203 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 02:57:31.577208 | orchestrator | Thursday 26 March 2026 02:57:29 +0000 (0:00:01.419) 0:08:29.455 ********
2026-03-26 02:57:31.577212 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.577217 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:57:31.577222 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:57:31.577226 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:57:31.577231 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:57:31.577235 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:57:31.577240 | orchestrator |
2026-03-26 02:57:31.577245 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 02:57:31.577249 | orchestrator | Thursday 26 March 2026 02:57:30 +0000 (0:00:00.658) 0:08:30.113 ********
2026-03-26 02:57:31.577254 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:57:31.577264 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:57:31.577268 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:57:31.577273 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:57:31.577283 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:58:03.971928 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:58:03.972045 | orchestrator |
2026-03-26 02:58:03.972059 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 02:58:03.972070 | orchestrator | Thursday 26 March 2026 02:57:31 +0000 (0:00:00.965) 0:08:31.079 ********
2026-03-26 02:58:03.972078 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.972087 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.972095 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.972103 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.972111 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:03.972119 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:03.972127 | orchestrator |
2026-03-26 02:58:03.972135 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 02:58:03.972144 | orchestrator | Thursday 26 March 2026 02:57:32 +0000 (0:00:01.045) 0:08:32.124 ********
2026-03-26 02:58:03.972152 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.972160 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.972168 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.972175 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.972184 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:03.972198 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:03.972211 | orchestrator |
2026-03-26 02:58:03.972224 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 02:58:03.972237 | orchestrator | Thursday 26 March 2026 02:57:34 +0000 (0:00:01.424) 0:08:33.549 ********
2026-03-26 02:58:03.972250 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:03.972264 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:03.972277 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:03.972290 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:58:03.972303 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:58:03.972316 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:58:03.972329 | orchestrator |
2026-03-26 02:58:03.972342 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 02:58:03.972382 | orchestrator | Thursday 26 March 2026 02:57:34 +0000 (0:00:00.668) 0:08:34.217 ********
2026-03-26 02:58:03.972396 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:03.972410 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:03.972423 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:03.972438 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.972453 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:03.972467 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:03.972482 | orchestrator |
2026-03-26 02:58:03.972496 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 02:58:03.972509 | orchestrator | Thursday 26 March 2026 02:57:35 +0000 (0:00:00.985) 0:08:35.203 ********
2026-03-26 02:58:03.972519 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.972528 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.972537 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.972547 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:58:03.972556 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:58:03.972564 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:58:03.972572 | orchestrator |
2026-03-26 02:58:03.972580 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 02:58:03.972590 | orchestrator | Thursday 26 March 2026 02:57:36 +0000 (0:00:00.647) 0:08:35.851 ********
2026-03-26 02:58:03.972604 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.972617 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.972631 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.972644 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:58:03.972657 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:58:03.972696 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:58:03.972711 | orchestrator |
2026-03-26 02:58:03.972725 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 02:58:03.972738 | orchestrator | Thursday 26 March 2026 02:57:37 +0000 (0:00:00.949) 0:08:36.801 ********
2026-03-26 02:58:03.972753 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.972766 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.972779 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.972793 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:58:03.972807 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:58:03.972820 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:58:03.972833 | orchestrator |
2026-03-26 02:58:03.972847 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 02:58:03.972880 | orchestrator | Thursday 26 March 2026 02:57:37 +0000 (0:00:00.606) 0:08:37.407 ********
2026-03-26 02:58:03.972893 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:03.972907 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:03.972920 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:03.972933 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:58:03.972947 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:58:03.972960 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:58:03.972974 | orchestrator |
2026-03-26 02:58:03.972987 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 02:58:03.973001 | orchestrator | Thursday 26 March 2026 02:57:38 +0000 (0:00:00.962) 0:08:38.369 ********
2026-03-26 02:58:03.973014 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:03.973029 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:03.973037 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:03.973046 | orchestrator | skipping: [testbed-node-0]
2026-03-26 02:58:03.973053 | orchestrator | skipping: [testbed-node-1]
2026-03-26 02:58:03.973061 | orchestrator | skipping: [testbed-node-2]
2026-03-26 02:58:03.973069 | orchestrator |
2026-03-26 02:58:03.973077 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 02:58:03.973086 | orchestrator | Thursday 26 March 2026 02:57:39 +0000 (0:00:00.674) 0:08:39.044 ********
2026-03-26 02:58:03.973095 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:03.973109 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:03.973122 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:03.973135 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.973148 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:03.973161 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:03.973174 | orchestrator |
2026-03-26 02:58:03.973187 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 02:58:03.973200 | orchestrator | Thursday 26 March 2026 02:57:40 +0000 (0:00:00.913) 0:08:39.957 ********
2026-03-26 02:58:03.973214 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.973226 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.973239 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.973253 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.973287 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:03.973302 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:03.973315 | orchestrator |
2026-03-26 02:58:03.973329 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 02:58:03.973342 | orchestrator | Thursday 26 March 2026 02:57:41 +0000 (0:00:00.682) 0:08:40.639 ********
2026-03-26 02:58:03.973356 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.973369 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.973382 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.973451 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.973462 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:03.973470 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:03.973478 | orchestrator |
2026-03-26 02:58:03.973486 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-03-26 02:58:03.973495 | orchestrator | Thursday 26 March 2026 02:57:42 +0000 (0:00:01.459) 0:08:42.099 ********
2026-03-26 02:58:03.973513 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 02:58:03.973521 | orchestrator |
2026-03-26 02:58:03.973529 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-03-26 02:58:03.973538 | orchestrator | Thursday 26 March 2026 02:57:46 +0000 (0:00:04.199) 0:08:46.299 ********
2026-03-26 02:58:03.973546 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 02:58:03.973554 | orchestrator |
2026-03-26 02:58:03.973562 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-03-26 02:58:03.973570 | orchestrator | Thursday 26 March 2026 02:57:48 +0000 (0:00:01.847) 0:08:48.147 ********
2026-03-26 02:58:03.973577 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:58:03.973585 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:58:03.973593 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:58:03.973602 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.973609 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:58:03.973617 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:58:03.973625 | orchestrator |
2026-03-26 02:58:03.973634 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-03-26 02:58:03.973641 | orchestrator | Thursday 26 March 2026 02:57:50 +0000 (0:00:01.475) 0:08:49.623 ********
2026-03-26 02:58:03.973649 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:58:03.973657 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:58:03.973665 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:58:03.973673 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:58:03.973681 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:58:03.973689 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:58:03.973697 | orchestrator |
2026-03-26 02:58:03.973705 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-03-26 02:58:03.973713 | orchestrator | Thursday 26 March 2026 02:57:51 +0000 (0:00:01.315) 0:08:50.938 ********
2026-03-26 02:58:03.973722 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:58:03.973731 | orchestrator |
2026-03-26 02:58:03.973739 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-03-26 02:58:03.973747 | orchestrator | Thursday 26 March 2026 02:57:52 +0000 (0:00:01.345) 0:08:52.284 ********
2026-03-26 02:58:03.973755 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:58:03.973763 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:58:03.973771 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:58:03.973779 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:58:03.973787 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:58:03.973795 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:58:03.973803 | orchestrator |
2026-03-26 02:58:03.973811 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-03-26 02:58:03.973819 | orchestrator | Thursday 26 March 2026 02:57:54 +0000 (0:00:01.481) 0:08:53.765 ********
2026-03-26 02:58:03.973827 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:58:03.973835 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:58:03.973843 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:58:03.973851 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:58:03.973878 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:58:03.973886 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:58:03.973894 | orchestrator |
2026-03-26 02:58:03.973902 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-03-26 02:58:03.973910 | orchestrator | Thursday 26 March 2026 02:57:57 +0000 (0:00:03.720) 0:08:57.486 ********
2026-03-26 02:58:03.973923 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 02:58:03.973931 | orchestrator |
2026-03-26 02:58:03.973939 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-03-26 02:58:03.973952 | orchestrator | Thursday 26 March 2026 02:57:59 +0000 (0:00:01.429) 0:08:58.915 ********
2026-03-26 02:58:03.973960 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.973968 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.973976 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.973984 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.973992 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:03.974000 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:03.974007 | orchestrator |
2026-03-26 02:58:03.974075 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-03-26 02:58:03.974086 | orchestrator | Thursday 26 March 2026 02:58:00 +0000 (0:00:00.769) 0:08:59.685 ********
2026-03-26 02:58:03.974095 | orchestrator | changed: [testbed-node-3]
2026-03-26 02:58:03.974103 | orchestrator | changed: [testbed-node-4]
2026-03-26 02:58:03.974110 | orchestrator | changed: [testbed-node-5]
2026-03-26 02:58:03.974118 | orchestrator | changed: [testbed-node-1]
2026-03-26 02:58:03.974126 | orchestrator | changed: [testbed-node-0]
2026-03-26 02:58:03.974134 | orchestrator | changed: [testbed-node-2]
2026-03-26 02:58:03.974142 | orchestrator |
2026-03-26 02:58:03.974150 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-03-26 02:58:03.974158 | orchestrator | Thursday 26 March 2026 02:58:02 +0000 (0:00:02.440) 0:09:02.125 ********
2026-03-26 02:58:03.974166 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:03.974174 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:03.974182 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:03.974190 | orchestrator | ok: [testbed-node-0]
2026-03-26 02:58:03.974206 | orchestrator | ok: [testbed-node-1]
2026-03-26 02:58:33.011933 | orchestrator | ok: [testbed-node-2]
2026-03-26 02:58:33.012069 | orchestrator |
2026-03-26 02:58:33.012089 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-03-26 02:58:33.012103 | orchestrator |
2026-03-26 02:58:33.012115 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 02:58:33.012127 | orchestrator | Thursday 26 March 2026 02:58:03 +0000 (0:00:01.349) 0:09:03.475 ********
2026-03-26 02:58:33.012139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:58:33.012151 | orchestrator |
2026-03-26 02:58:33.012163 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 02:58:33.012174 | orchestrator | Thursday 26 March 2026 02:58:04 +0000 (0:00:00.584) 0:09:04.059 ********
2026-03-26 02:58:33.012186 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:58:33.012197 | orchestrator |
2026-03-26 02:58:33.012208 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 02:58:33.012219 | orchestrator | Thursday 26 March 2026 02:58:05 +0000 (0:00:00.894) 0:09:04.954 ********
2026-03-26 02:58:33.012230 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.012242 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.012253 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.012265 | orchestrator |
2026-03-26 02:58:33.012276 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 02:58:33.012287 | orchestrator | Thursday 26 March 2026 02:58:05 +0000 (0:00:00.345) 0:09:05.299 ********
2026-03-26 02:58:33.012298 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.012310 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.012321 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.012332 | orchestrator |
2026-03-26 02:58:33.012343 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 02:58:33.012354 | orchestrator | Thursday 26 March 2026 02:58:06 +0000 (0:00:00.765) 0:09:06.065 ********
2026-03-26 02:58:33.012365 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.012376 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.012387 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.012397 | orchestrator |
2026-03-26 02:58:33.012409 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 02:58:33.012444 | orchestrator | Thursday 26 March 2026 02:58:07 +0000 (0:00:00.668) 0:09:06.733 ********
2026-03-26 02:58:33.012458 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.012472 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.012485 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.012497 | orchestrator |
2026-03-26 02:58:33.012510 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 02:58:33.012522 | orchestrator | Thursday 26 March 2026 02:58:08 +0000 (0:00:01.029) 0:09:07.763 ********
2026-03-26 02:58:33.012535 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.012548 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.012560 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.012570 | orchestrator |
2026-03-26 02:58:33.012582 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 02:58:33.012592 | orchestrator | Thursday 26 March 2026 02:58:08 +0000 (0:00:00.348) 0:09:08.111 ********
2026-03-26 02:58:33.012603 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.012614 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.012625 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.012636 | orchestrator |
2026-03-26 02:58:33.012647 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 02:58:33.012658 | orchestrator | Thursday 26 March 2026 02:58:08 +0000 (0:00:00.331) 0:09:08.449 ********
2026-03-26 02:58:33.012669 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.012680 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.012691 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.012702 | orchestrator |
2026-03-26 02:58:33.012713 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 02:58:33.012724 | orchestrator | Thursday 26 March 2026 02:58:09 +0000 (0:00:00.331) 0:09:08.780 ********
2026-03-26 02:58:33.012735 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.012746 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.012757 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.012768 | orchestrator |
2026-03-26 02:58:33.012779 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 02:58:33.012805 | orchestrator | Thursday 26 March 2026 02:58:10 +0000 (0:00:01.088) 0:09:09.869 ********
2026-03-26 02:58:33.012816 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.012827 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.012838 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.012849 | orchestrator |
2026-03-26 02:58:33.012860 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 02:58:33.012871 | orchestrator | Thursday 26 March 2026 02:58:11 +0000 (0:00:00.723) 0:09:10.592 ********
2026-03-26 02:58:33.012909 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.012923 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.012934 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.012945 | orchestrator |
2026-03-26 02:58:33.012957 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 02:58:33.012968 | orchestrator | Thursday 26 March 2026 02:58:11 +0000 (0:00:00.355) 0:09:10.947 ********
2026-03-26 02:58:33.012979 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.012990 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.013004 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.013021 | orchestrator |
2026-03-26 02:58:33.013039 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 02:58:33.013057 | orchestrator | Thursday 26 March 2026 02:58:11 +0000 (0:00:00.362) 0:09:11.309 ********
2026-03-26 02:58:33.013076 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.013093 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.013111 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.013129 | orchestrator |
2026-03-26 02:58:33.013148 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 02:58:33.013166 | orchestrator | Thursday 26 March 2026 02:58:12 +0000 (0:00:00.651) 0:09:11.961 ********
2026-03-26 02:58:33.013228 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.013254 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.013272 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.013290 | orchestrator |
2026-03-26 02:58:33.013308 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 02:58:33.013325 | orchestrator | Thursday 26 March 2026 02:58:12 +0000 (0:00:00.372) 0:09:12.333 ********
2026-03-26 02:58:33.013344 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.013362 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.013379 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.013396 | orchestrator |
2026-03-26 02:58:33.013414 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 02:58:33.013432 | orchestrator | Thursday 26 March 2026 02:58:13 +0000 (0:00:00.373) 0:09:12.707 ********
2026-03-26 02:58:33.013451 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.013468 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.013487 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.013507 | orchestrator |
2026-03-26 02:58:33.013526 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 02:58:33.013544 | orchestrator | Thursday 26 March 2026 02:58:13 +0000 (0:00:00.353) 0:09:13.061 ********
2026-03-26 02:58:33.013563 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.013582 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.013602 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.013620 | orchestrator |
2026-03-26 02:58:33.013637 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 02:58:33.013649 | orchestrator | Thursday 26 March 2026 02:58:14 +0000 (0:00:00.616) 0:09:13.678 ********
2026-03-26 02:58:33.013660 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.013671 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.013682 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.013693 | orchestrator |
2026-03-26 02:58:33.013704 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 02:58:33.013715 | orchestrator | Thursday 26 March 2026 02:58:14 +0000 (0:00:00.338) 0:09:14.016 ********
2026-03-26 02:58:33.013726 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.013737 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.013748 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.013759 | orchestrator |
2026-03-26 02:58:33.013770 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 02:58:33.013781 | orchestrator | Thursday 26 March 2026 02:58:14 +0000 (0:00:00.352) 0:09:14.369 ********
2026-03-26 02:58:33.013792 | orchestrator | ok: [testbed-node-3]
2026-03-26 02:58:33.013803 | orchestrator | ok: [testbed-node-4]
2026-03-26 02:58:33.013814 | orchestrator | ok: [testbed-node-5]
2026-03-26 02:58:33.013825 | orchestrator |
2026-03-26 02:58:33.013835 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-26 02:58:33.013847 | orchestrator | Thursday 26 March 2026 02:58:15 +0000 (0:00:00.853) 0:09:15.223 ********
2026-03-26 02:58:33.013858 | orchestrator | skipping: [testbed-node-4]
2026-03-26 02:58:33.013869 | orchestrator | skipping: [testbed-node-5]
2026-03-26 02:58:33.013880 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-26 02:58:33.013933 | orchestrator |
2026-03-26 02:58:33.013945 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-26 02:58:33.013956 | orchestrator | Thursday 26 March 2026 02:58:16 +0000 (0:00:00.477) 0:09:15.701 ********
2026-03-26 02:58:33.013967 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 02:58:33.013978 | orchestrator |
2026-03-26 02:58:33.013988 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-26 02:58:33.013999 | orchestrator | Thursday 26 March 2026 02:58:18 +0000 (0:00:02.073) 0:09:17.775 ********
2026-03-26 02:58:33.014012 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-26 02:58:33.014109 | orchestrator | skipping: [testbed-node-3]
2026-03-26 02:58:33.014122 | orchestrator |
2026-03-26 02:58:33.014133 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-26 02:58:33.014144 | orchestrator | Thursday 26 March 2026 02:58:18 +0000 (0:00:00.244) 0:09:18.020 ********
2026-03-26 02:58:33.014167 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-26 02:58:33.014187 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-26 02:58:33.014199 | orchestrator |
2026-03-26 02:58:33.014210 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-26 02:58:33.014221 | orchestrator | Thursday 26 March 2026 02:58:26 +0000 (0:00:08.350) 0:09:26.371 ********
2026-03-26 02:58:33.014232 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 02:58:33.014243 | orchestrator |
2026-03-26 02:58:33.014254 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-26 02:58:33.014265 | orchestrator | Thursday 26 March 2026 02:58:31 +0000 (0:00:04.352) 0:09:30.723 ********
2026-03-26 02:58:33.014276 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 02:58:33.014288 | orchestrator |
2026-03-26 02:58:33.014299 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-26 02:58:33.014310 | orchestrator | Thursday 26 March 2026 02:58:31 +0000 (0:00:00.718) 0:09:31.442 ********
2026-03-26 02:58:33.014334 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-26 02:58:59.475217 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-26 02:58:59.475311 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-26 02:58:59.475321 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-26 02:58:59.475330 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-26 02:58:59.475337 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-26 02:58:59.475343 | orchestrator |
2026-03-26 02:58:59.475351 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-26 02:58:59.475357 | orchestrator | Thursday 26 March 2026 02:58:32 +0000 (0:00:01.072)
0:09:32.514 ******** 2026-03-26 02:58:59.475364 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:58:59.475370 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 02:58:59.475377 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 02:58:59.475383 | orchestrator | 2026-03-26 02:58:59.475390 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-26 02:58:59.475396 | orchestrator | Thursday 26 March 2026 02:58:35 +0000 (0:00:02.118) 0:09:34.633 ******** 2026-03-26 02:58:59.475402 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-26 02:58:59.475410 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 02:58:59.475416 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.475423 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-26 02:58:59.475430 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 02:58:59.475440 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.475451 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-26 02:58:59.475461 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-26 02:58:59.475492 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.475504 | orchestrator | 2026-03-26 02:58:59.475514 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-26 02:58:59.475524 | orchestrator | Thursday 26 March 2026 02:58:36 +0000 (0:00:01.512) 0:09:36.145 ******** 2026-03-26 02:58:59.475534 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.475544 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.475554 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.475565 | orchestrator | 2026-03-26 02:58:59.475576 | orchestrator | TASK [ceph-mds : Non_containerized.yml] 
**************************************** 2026-03-26 02:58:59.475588 | orchestrator | Thursday 26 March 2026 02:58:39 +0000 (0:00:02.526) 0:09:38.672 ******** 2026-03-26 02:58:59.475598 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:58:59.475608 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:58:59.475619 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:58:59.475629 | orchestrator | 2026-03-26 02:58:59.475640 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-26 02:58:59.475650 | orchestrator | Thursday 26 March 2026 02:58:39 +0000 (0:00:00.370) 0:09:39.042 ******** 2026-03-26 02:58:59.475660 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:58:59.475671 | orchestrator | 2026-03-26 02:58:59.475681 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-26 02:58:59.475692 | orchestrator | Thursday 26 March 2026 02:58:40 +0000 (0:00:00.954) 0:09:39.997 ******** 2026-03-26 02:58:59.475702 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:58:59.475715 | orchestrator | 2026-03-26 02:58:59.475725 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-26 02:58:59.475736 | orchestrator | Thursday 26 March 2026 02:58:41 +0000 (0:00:00.669) 0:09:40.667 ******** 2026-03-26 02:58:59.475747 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.475757 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.475767 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.475777 | orchestrator | 2026-03-26 02:58:59.475788 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-26 02:58:59.475817 | orchestrator | Thursday 26 March 2026 02:58:42 +0000 
(0:00:01.277) 0:09:41.944 ******** 2026-03-26 02:58:59.475829 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.475841 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.475851 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.475863 | orchestrator | 2026-03-26 02:58:59.475873 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-26 02:58:59.475883 | orchestrator | Thursday 26 March 2026 02:58:43 +0000 (0:00:01.458) 0:09:43.402 ******** 2026-03-26 02:58:59.475894 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.475904 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.475961 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.475971 | orchestrator | 2026-03-26 02:58:59.475981 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-26 02:58:59.475991 | orchestrator | Thursday 26 March 2026 02:58:45 +0000 (0:00:01.674) 0:09:45.077 ******** 2026-03-26 02:58:59.476002 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.476012 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.476022 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.476033 | orchestrator | 2026-03-26 02:58:59.476043 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-26 02:58:59.476053 | orchestrator | Thursday 26 March 2026 02:58:47 +0000 (0:00:01.827) 0:09:46.904 ******** 2026-03-26 02:58:59.476062 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:58:59.476070 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:58:59.476079 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:58:59.476088 | orchestrator | 2026-03-26 02:58:59.476097 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-26 02:58:59.476118 | orchestrator | Thursday 26 March 2026 02:58:48 +0000 (0:00:01.559) 0:09:48.464 
******** 2026-03-26 02:58:59.476129 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.476139 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.476170 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.476181 | orchestrator | 2026-03-26 02:58:59.476192 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-26 02:58:59.476202 | orchestrator | Thursday 26 March 2026 02:58:49 +0000 (0:00:00.669) 0:09:49.133 ******** 2026-03-26 02:58:59.476212 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:58:59.476223 | orchestrator | 2026-03-26 02:58:59.476233 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-03-26 02:58:59.476332 | orchestrator | Thursday 26 March 2026 02:58:50 +0000 (0:00:00.864) 0:09:49.998 ******** 2026-03-26 02:58:59.476344 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:58:59.476355 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:58:59.476365 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:58:59.476376 | orchestrator | 2026-03-26 02:58:59.476385 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-26 02:58:59.476394 | orchestrator | Thursday 26 March 2026 02:58:50 +0000 (0:00:00.370) 0:09:50.368 ******** 2026-03-26 02:58:59.476404 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:58:59.476415 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:58:59.476425 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:58:59.476435 | orchestrator | 2026-03-26 02:58:59.476445 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-26 02:58:59.476456 | orchestrator | Thursday 26 March 2026 02:58:51 +0000 (0:00:01.114) 0:09:51.482 ******** 2026-03-26 02:58:59.476468 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-03-26 02:58:59.476479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 02:58:59.476490 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 02:58:59.476501 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:58:59.476513 | orchestrator | 2026-03-26 02:58:59.476525 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-26 02:58:59.476535 | orchestrator | Thursday 26 March 2026 02:58:52 +0000 (0:00:00.959) 0:09:52.442 ******** 2026-03-26 02:58:59.476546 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:58:59.476556 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:58:59.476566 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:58:59.476577 | orchestrator | 2026-03-26 02:58:59.476589 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-26 02:58:59.476599 | orchestrator | 2026-03-26 02:58:59.476609 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-26 02:58:59.476619 | orchestrator | Thursday 26 March 2026 02:58:53 +0000 (0:00:00.926) 0:09:53.368 ******** 2026-03-26 02:58:59.476629 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:58:59.476640 | orchestrator | 2026-03-26 02:58:59.476650 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-26 02:58:59.476660 | orchestrator | Thursday 26 March 2026 02:58:54 +0000 (0:00:00.558) 0:09:53.926 ******** 2026-03-26 02:58:59.476669 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:58:59.476679 | orchestrator | 2026-03-26 02:58:59.476689 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-03-26 02:58:59.476698 | orchestrator | Thursday 26 March 2026 02:58:55 +0000 (0:00:00.896) 0:09:54.823 ******** 2026-03-26 02:58:59.476709 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:58:59.476718 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:58:59.476727 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:58:59.476749 | orchestrator | 2026-03-26 02:58:59.476759 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-26 02:58:59.476769 | orchestrator | Thursday 26 March 2026 02:58:55 +0000 (0:00:00.342) 0:09:55.166 ******** 2026-03-26 02:58:59.476778 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:58:59.476788 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:58:59.476798 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:58:59.476809 | orchestrator | 2026-03-26 02:58:59.476819 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-26 02:58:59.476829 | orchestrator | Thursday 26 March 2026 02:58:56 +0000 (0:00:00.682) 0:09:55.848 ******** 2026-03-26 02:58:59.476839 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:58:59.476860 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:58:59.476872 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:58:59.476882 | orchestrator | 2026-03-26 02:58:59.476893 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-26 02:58:59.476903 | orchestrator | Thursday 26 March 2026 02:58:57 +0000 (0:00:01.071) 0:09:56.920 ******** 2026-03-26 02:58:59.476935 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:58:59.476945 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:58:59.476955 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:58:59.476965 | orchestrator | 2026-03-26 02:58:59.476975 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-26 
02:58:59.476985 | orchestrator | Thursday 26 March 2026 02:58:58 +0000 (0:00:00.699) 0:09:57.620 ******** 2026-03-26 02:58:59.476996 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:58:59.477008 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:58:59.477019 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:58:59.477028 | orchestrator | 2026-03-26 02:58:59.477038 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-26 02:58:59.477048 | orchestrator | Thursday 26 March 2026 02:58:58 +0000 (0:00:00.347) 0:09:57.968 ******** 2026-03-26 02:58:59.477059 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:58:59.477071 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:58:59.477083 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:58:59.477094 | orchestrator | 2026-03-26 02:58:59.477105 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-26 02:58:59.477115 | orchestrator | Thursday 26 March 2026 02:58:58 +0000 (0:00:00.337) 0:09:58.305 ******** 2026-03-26 02:58:59.477126 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:58:59.477136 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:58:59.477146 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:58:59.477157 | orchestrator | 2026-03-26 02:58:59.477185 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-26 02:59:21.449879 | orchestrator | Thursday 26 March 2026 02:58:59 +0000 (0:00:00.669) 0:09:58.975 ******** 2026-03-26 02:59:21.450090 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:59:21.450110 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:59:21.450121 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:59:21.450131 | orchestrator | 2026-03-26 02:59:21.450142 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 02:59:21.450152 | 
orchestrator | Thursday 26 March 2026 02:59:00 +0000 (0:00:00.714) 0:09:59.689 ******** 2026-03-26 02:59:21.450162 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:59:21.450172 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:59:21.450182 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:59:21.450192 | orchestrator | 2026-03-26 02:59:21.450202 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 02:59:21.450212 | orchestrator | Thursday 26 March 2026 02:59:00 +0000 (0:00:00.704) 0:10:00.394 ******** 2026-03-26 02:59:21.450222 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:59:21.450233 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:59:21.450243 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:59:21.450253 | orchestrator | 2026-03-26 02:59:21.450263 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 02:59:21.450298 | orchestrator | Thursday 26 March 2026 02:59:01 +0000 (0:00:00.354) 0:10:00.748 ******** 2026-03-26 02:59:21.450309 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:59:21.450319 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:59:21.450329 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:59:21.450339 | orchestrator | 2026-03-26 02:59:21.450349 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 02:59:21.450359 | orchestrator | Thursday 26 March 2026 02:59:01 +0000 (0:00:00.652) 0:10:01.400 ******** 2026-03-26 02:59:21.450371 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:59:21.450382 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:59:21.450393 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:59:21.450403 | orchestrator | 2026-03-26 02:59:21.450415 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 02:59:21.450425 | orchestrator | Thursday 26 March 2026 
02:59:02 +0000 (0:00:00.372) 0:10:01.773 ******** 2026-03-26 02:59:21.450436 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:59:21.450446 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:59:21.450455 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:59:21.450465 | orchestrator | 2026-03-26 02:59:21.450475 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 02:59:21.450484 | orchestrator | Thursday 26 March 2026 02:59:02 +0000 (0:00:00.370) 0:10:02.143 ******** 2026-03-26 02:59:21.450494 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:59:21.450504 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:59:21.450514 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:59:21.450523 | orchestrator | 2026-03-26 02:59:21.450533 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 02:59:21.450543 | orchestrator | Thursday 26 March 2026 02:59:03 +0000 (0:00:00.378) 0:10:02.522 ******** 2026-03-26 02:59:21.450553 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:59:21.450563 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:59:21.450572 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:59:21.450582 | orchestrator | 2026-03-26 02:59:21.450592 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 02:59:21.450602 | orchestrator | Thursday 26 March 2026 02:59:03 +0000 (0:00:00.658) 0:10:03.181 ******** 2026-03-26 02:59:21.450621 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:59:21.450632 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:59:21.450642 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:59:21.450652 | orchestrator | 2026-03-26 02:59:21.450662 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 02:59:21.450672 | orchestrator | Thursday 26 March 2026 02:59:03 +0000 (0:00:00.328) 
0:10:03.510 ******** 2026-03-26 02:59:21.450682 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:59:21.450692 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:59:21.450701 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:59:21.450711 | orchestrator | 2026-03-26 02:59:21.450721 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 02:59:21.450731 | orchestrator | Thursday 26 March 2026 02:59:04 +0000 (0:00:00.358) 0:10:03.868 ******** 2026-03-26 02:59:21.450741 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:59:21.450751 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:59:21.450761 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:59:21.450770 | orchestrator | 2026-03-26 02:59:21.450794 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 02:59:21.450804 | orchestrator | Thursday 26 March 2026 02:59:04 +0000 (0:00:00.338) 0:10:04.206 ******** 2026-03-26 02:59:21.450814 | orchestrator | ok: [testbed-node-3] 2026-03-26 02:59:21.450824 | orchestrator | ok: [testbed-node-4] 2026-03-26 02:59:21.450834 | orchestrator | ok: [testbed-node-5] 2026-03-26 02:59:21.450843 | orchestrator | 2026-03-26 02:59:21.450853 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-26 02:59:21.450863 | orchestrator | Thursday 26 March 2026 02:59:05 +0000 (0:00:00.940) 0:10:05.147 ******** 2026-03-26 02:59:21.450883 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:59:21.450894 | orchestrator | 2026-03-26 02:59:21.450904 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-26 02:59:21.450914 | orchestrator | Thursday 26 March 2026 02:59:06 +0000 (0:00:00.846) 0:10:05.993 ******** 2026-03-26 02:59:21.450986 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:59:21.450999 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 02:59:21.451009 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 02:59:21.451020 | orchestrator | 2026-03-26 02:59:21.451030 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-26 02:59:21.451040 | orchestrator | Thursday 26 March 2026 02:59:08 +0000 (0:00:02.156) 0:10:08.150 ******** 2026-03-26 02:59:21.451050 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-26 02:59:21.451060 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 02:59:21.451070 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:59:21.451098 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-26 02:59:21.451109 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 02:59:21.451119 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:59:21.451129 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-26 02:59:21.451139 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-26 02:59:21.451148 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:59:21.451158 | orchestrator | 2026-03-26 02:59:21.451168 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-26 02:59:21.451178 | orchestrator | Thursday 26 March 2026 02:59:09 +0000 (0:00:01.227) 0:10:09.378 ******** 2026-03-26 02:59:21.451188 | orchestrator | skipping: [testbed-node-3] 2026-03-26 02:59:21.451198 | orchestrator | skipping: [testbed-node-4] 2026-03-26 02:59:21.451208 | orchestrator | skipping: [testbed-node-5] 2026-03-26 02:59:21.451218 | orchestrator | 2026-03-26 02:59:21.451228 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-26 02:59:21.451238 | orchestrator | Thursday 26 March 2026 02:59:10 +0000 
(0:00:00.373) 0:10:09.751 ******** 2026-03-26 02:59:21.451248 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 02:59:21.451258 | orchestrator | 2026-03-26 02:59:21.451268 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-26 02:59:21.451277 | orchestrator | Thursday 26 March 2026 02:59:11 +0000 (0:00:00.841) 0:10:10.593 ******** 2026-03-26 02:59:21.451288 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 02:59:21.451300 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 02:59:21.451311 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 02:59:21.451320 | orchestrator | 2026-03-26 02:59:21.451330 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-26 02:59:21.451340 | orchestrator | Thursday 26 March 2026 02:59:11 +0000 (0:00:00.815) 0:10:11.408 ******** 2026-03-26 02:59:21.451350 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:59:21.451360 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-26 02:59:21.451370 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:59:21.451380 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:59:21.451415 | orchestrator | changed: [testbed-node-5 -> {{ 
groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-26 02:59:21.451426 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-26 02:59:21.451436 | orchestrator | 2026-03-26 02:59:21.451446 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-26 02:59:21.451456 | orchestrator | Thursday 26 March 2026 02:59:16 +0000 (0:00:04.560) 0:10:15.969 ******** 2026-03-26 02:59:21.451466 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:59:21.451476 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 02:59:21.451486 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:59:21.451495 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 02:59:21.451505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 02:59:21.451531 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 02:59:21.451542 | orchestrator | 2026-03-26 02:59:21.451552 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-26 02:59:21.451562 | orchestrator | Thursday 26 March 2026 02:59:18 +0000 (0:00:02.437) 0:10:18.407 ******** 2026-03-26 02:59:21.451572 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-26 02:59:21.451582 | orchestrator | changed: [testbed-node-3] 2026-03-26 02:59:21.451592 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-26 02:59:21.451602 | orchestrator | changed: [testbed-node-4] 2026-03-26 02:59:21.451612 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-26 02:59:21.451622 | orchestrator | changed: [testbed-node-5] 2026-03-26 02:59:21.451632 | orchestrator | 
2026-03-26 02:59:21.451642 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-26 02:59:21.451652 | orchestrator | Thursday 26 March 2026 02:59:20 +0000 (0:00:01.617) 0:10:20.024 ******** 2026-03-26 02:59:21.451662 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-26 02:59:21.451671 | orchestrator | 2026-03-26 02:59:21.451681 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-26 02:59:21.451691 | orchestrator | Thursday 26 March 2026 02:59:20 +0000 (0:00:00.267) 0:10:20.292 ******** 2026-03-26 02:59:21.451701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 02:59:21.451711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 02:59:21.451727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028491 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:07.028501 | orchestrator | 2026-03-26 03:00:07.028506 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-26 03:00:07.028513 | orchestrator | Thursday 26 March 2026 02:59:21 +0000 (0:00:00.660) 0:10:20.953 ******** 2026-03-26 03:00:07.028519 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 
8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 03:00:07.028563 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:07.028568 | orchestrator | 2026-03-26 03:00:07.028573 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-26 03:00:07.028578 | orchestrator | Thursday 26 March 2026 02:59:22 +0000 (0:00:00.660) 0:10:21.614 ******** 2026-03-26 03:00:07.028583 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 03:00:07.028590 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 03:00:07.028594 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 03:00:07.028602 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 03:00:07.028609 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 03:00:07.028616 | orchestrator | 2026-03-26 03:00:07.028637 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-26 03:00:07.028653 | orchestrator | Thursday 26 March 2026 02:59:53 +0000 (0:00:31.288) 0:10:52.902 ******** 2026-03-26 03:00:07.028660 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:07.028667 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:07.028674 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:07.028683 | orchestrator | 2026-03-26 03:00:07.028689 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-26 03:00:07.028697 | orchestrator | Thursday 26 March 2026 02:59:53 +0000 (0:00:00.365) 0:10:53.268 ******** 2026-03-26 03:00:07.028705 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:07.028712 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:07.028719 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:07.028727 | orchestrator | 2026-03-26 03:00:07.028748 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-26 03:00:07.028756 | orchestrator | Thursday 26 March 2026 02:59:54 +0000 (0:00:00.610) 0:10:53.878 ******** 2026-03-26 03:00:07.028766 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:00:07.028771 | orchestrator | 2026-03-26 03:00:07.028776 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-26 03:00:07.028781 | orchestrator | Thursday 26 March 2026 02:59:55 +0000 (0:00:00.648) 0:10:54.526 ******** 2026-03-26 03:00:07.028785 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:00:07.028790 | 
orchestrator | 2026-03-26 03:00:07.028796 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-26 03:00:07.028804 | orchestrator | Thursday 26 March 2026 02:59:55 +0000 (0:00:00.915) 0:10:55.442 ******** 2026-03-26 03:00:07.028812 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:00:07.028820 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:00:07.028827 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:00:07.028835 | orchestrator | 2026-03-26 03:00:07.028842 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-26 03:00:07.028850 | orchestrator | Thursday 26 March 2026 02:59:57 +0000 (0:00:01.331) 0:10:56.773 ******** 2026-03-26 03:00:07.028866 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:00:07.028873 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:00:07.028882 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:00:07.028889 | orchestrator | 2026-03-26 03:00:07.028897 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-26 03:00:07.028905 | orchestrator | Thursday 26 March 2026 02:59:58 +0000 (0:00:01.234) 0:10:58.008 ******** 2026-03-26 03:00:07.028912 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:00:07.028933 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:00:07.028938 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:00:07.028943 | orchestrator | 2026-03-26 03:00:07.028948 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-26 03:00:07.028952 | orchestrator | Thursday 26 March 2026 03:00:00 +0000 (0:00:01.817) 0:10:59.825 ******** 2026-03-26 03:00:07.028957 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 03:00:07.028962 | orchestrator | changed: [testbed-node-4] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 03:00:07.029003 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 03:00:07.029008 | orchestrator | 2026-03-26 03:00:07.029013 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-26 03:00:07.029017 | orchestrator | Thursday 26 March 2026 03:00:03 +0000 (0:00:02.837) 0:11:02.663 ******** 2026-03-26 03:00:07.029022 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:07.029027 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:07.029032 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:07.029036 | orchestrator | 2026-03-26 03:00:07.029041 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-26 03:00:07.029046 | orchestrator | Thursday 26 March 2026 03:00:03 +0000 (0:00:00.373) 0:11:03.036 ******** 2026-03-26 03:00:07.029050 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:00:07.029055 | orchestrator | 2026-03-26 03:00:07.029060 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-26 03:00:07.029065 | orchestrator | Thursday 26 March 2026 03:00:04 +0000 (0:00:00.907) 0:11:03.943 ******** 2026-03-26 03:00:07.029069 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:07.029075 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:07.029080 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:07.029084 | orchestrator | 2026-03-26 03:00:07.029089 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-26 03:00:07.029094 | orchestrator | Thursday 26 March 2026 03:00:04 +0000 (0:00:00.422) 0:11:04.365 ******** 2026-03-26 
03:00:07.029098 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:07.029103 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:07.029108 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:07.029112 | orchestrator | 2026-03-26 03:00:07.029117 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-26 03:00:07.029122 | orchestrator | Thursday 26 March 2026 03:00:05 +0000 (0:00:00.521) 0:11:04.887 ******** 2026-03-26 03:00:07.029127 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 03:00:07.029132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 03:00:07.029137 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 03:00:07.029142 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:07.029146 | orchestrator | 2026-03-26 03:00:07.029151 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-26 03:00:07.029156 | orchestrator | Thursday 26 March 2026 03:00:06 +0000 (0:00:01.352) 0:11:06.239 ******** 2026-03-26 03:00:07.029161 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:07.029165 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:07.029176 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:07.029181 | orchestrator | 2026-03-26 03:00:07.029186 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:00:07.029191 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-26 03:00:07.029201 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-26 03:00:07.029206 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-26 03:00:07.029211 | orchestrator | testbed-node-3 : ok=193  changed=45  
unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-26 03:00:07.029216 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-26 03:00:07.029220 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-26 03:00:07.029225 | orchestrator | 2026-03-26 03:00:07.029230 | orchestrator | 2026-03-26 03:00:07.029234 | orchestrator | 2026-03-26 03:00:07.029239 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:00:07.029244 | orchestrator | Thursday 26 March 2026 03:00:06 +0000 (0:00:00.276) 0:11:06.515 ******** 2026-03-26 03:00:07.029249 | orchestrator | =============================================================================== 2026-03-26 03:00:07.029253 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.17s 2026-03-26 03:00:07.029258 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.01s 2026-03-26 03:00:07.029263 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.29s 2026-03-26 03:00:07.029268 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.28s 2026-03-26 03:00:07.029272 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.85s 2026-03-26 03:00:07.029281 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.92s 2026-03-26 03:00:07.672425 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.70s 2026-03-26 03:00:07.672505 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.63s 2026-03-26 03:00:07.672514 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.99s 2026-03-26 03:00:07.672521 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.35s 2026-03-26 03:00:07.672528 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.37s 2026-03-26 03:00:07.672534 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.36s 2026-03-26 03:00:07.672540 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.32s 2026-03-26 03:00:07.672546 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.56s 2026-03-26 03:00:07.672552 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.35s 2026-03-26 03:00:07.672559 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.20s 2026-03-26 03:00:07.672565 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.72s 2026-03-26 03:00:07.672571 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.47s 2026-03-26 03:00:07.672577 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.43s 2026-03-26 03:00:07.672583 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.13s 2026-03-26 03:00:10.332889 | orchestrator | 2026-03-26 03:00:10 | INFO  | Task 62df255a-8406-4b80-a8bf-efaf3055d2ba 
(ceph-pools) was prepared for execution. 2026-03-26 03:00:10.333028 | orchestrator | 2026-03-26 03:00:10 | INFO  | It takes a moment until task 62df255a-8406-4b80-a8bf-efaf3055d2ba (ceph-pools) has been started and output is visible here. 2026-03-26 03:00:25.708237 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-26 03:00:25.708344 | orchestrator | 2.16.14 2026-03-26 03:00:25.708360 | orchestrator | 2026-03-26 03:00:25.708371 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-26 03:00:25.708382 | orchestrator | 2026-03-26 03:00:25.708391 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 03:00:25.708401 | orchestrator | Thursday 26 March 2026 03:00:15 +0000 (0:00:00.682) 0:00:00.682 ******** 2026-03-26 03:00:25.708422 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:00:25.708432 | orchestrator | 2026-03-26 03:00:25.708441 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 03:00:25.708450 | orchestrator | Thursday 26 March 2026 03:00:16 +0000 (0:00:00.729) 0:00:01.411 ******** 2026-03-26 03:00:25.708459 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.708469 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.708478 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.708487 | orchestrator | 2026-03-26 03:00:25.708496 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 03:00:25.708504 | orchestrator | Thursday 26 March 2026 03:00:16 +0000 (0:00:00.672) 0:00:02.083 ******** 2026-03-26 03:00:25.708513 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.708522 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.708531 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.708540 
| orchestrator | 2026-03-26 03:00:25.708549 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 03:00:25.708558 | orchestrator | Thursday 26 March 2026 03:00:17 +0000 (0:00:00.346) 0:00:02.430 ******** 2026-03-26 03:00:25.708566 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.708575 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.708584 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.708593 | orchestrator | 2026-03-26 03:00:25.708617 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 03:00:25.708626 | orchestrator | Thursday 26 March 2026 03:00:18 +0000 (0:00:00.953) 0:00:03.384 ******** 2026-03-26 03:00:25.708635 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.708644 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.708653 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.708670 | orchestrator | 2026-03-26 03:00:25.708679 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 03:00:25.708688 | orchestrator | Thursday 26 March 2026 03:00:18 +0000 (0:00:00.372) 0:00:03.756 ******** 2026-03-26 03:00:25.708697 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.708706 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.708714 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.708723 | orchestrator | 2026-03-26 03:00:25.708732 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 03:00:25.708741 | orchestrator | Thursday 26 March 2026 03:00:18 +0000 (0:00:00.309) 0:00:04.066 ******** 2026-03-26 03:00:25.708750 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.708766 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.708781 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.708795 | orchestrator | 2026-03-26 03:00:25.708811 | orchestrator | TASK 
[ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 03:00:25.708826 | orchestrator | Thursday 26 March 2026 03:00:19 +0000 (0:00:00.344) 0:00:04.410 ******** 2026-03-26 03:00:25.708841 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:25.708857 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:25.708872 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:25.708889 | orchestrator | 2026-03-26 03:00:25.708905 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 03:00:25.708945 | orchestrator | Thursday 26 March 2026 03:00:19 +0000 (0:00:00.607) 0:00:05.018 ******** 2026-03-26 03:00:25.708962 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.708977 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.709024 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.709036 | orchestrator | 2026-03-26 03:00:25.709046 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 03:00:25.709057 | orchestrator | Thursday 26 March 2026 03:00:19 +0000 (0:00:00.320) 0:00:05.339 ******** 2026-03-26 03:00:25.709067 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 03:00:25.709077 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 03:00:25.709087 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 03:00:25.709097 | orchestrator | 2026-03-26 03:00:25.709107 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 03:00:25.709117 | orchestrator | Thursday 26 March 2026 03:00:20 +0000 (0:00:00.675) 0:00:06.014 ******** 2026-03-26 03:00:25.709127 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:25.709148 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:25.709157 | 
orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:25.709166 | orchestrator | 2026-03-26 03:00:25.709175 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 03:00:25.709184 | orchestrator | Thursday 26 March 2026 03:00:21 +0000 (0:00:00.504) 0:00:06.518 ******** 2026-03-26 03:00:25.709192 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 03:00:25.709201 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 03:00:25.709210 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 03:00:25.709218 | orchestrator | 2026-03-26 03:00:25.709227 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 03:00:25.709236 | orchestrator | Thursday 26 March 2026 03:00:23 +0000 (0:00:02.274) 0:00:08.793 ******** 2026-03-26 03:00:25.709246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 03:00:25.709255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 03:00:25.709264 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 03:00:25.709273 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:25.709282 | orchestrator | 2026-03-26 03:00:25.709307 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 03:00:25.709316 | orchestrator | Thursday 26 March 2026 03:00:24 +0000 (0:00:00.695) 0:00:09.488 ******** 2026-03-26 03:00:25.709327 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 03:00:25.709338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 03:00:25.709347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 03:00:25.709357 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:25.709365 | orchestrator | 2026-03-26 03:00:25.709374 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 03:00:25.709383 | orchestrator | Thursday 26 March 2026 03:00:25 +0000 (0:00:01.166) 0:00:10.654 ******** 2026-03-26 03:00:25.709401 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:25.709424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:25.709434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:25.709443 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:25.709452 | orchestrator | 2026-03-26 03:00:25.709461 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 03:00:25.709470 | orchestrator | Thursday 26 March 2026 03:00:25 +0000 (0:00:00.178) 0:00:10.833 ******** 2026-03-26 03:00:25.709481 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c1b85917b265', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 03:00:22.131422', 'end': '2026-03-26 03:00:22.186955', 'delta': '0:00:00.055533', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c1b85917b265'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 03:00:25.709494 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '1fb5a820b9f6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 03:00:22.709658', 'end': '2026-03-26 03:00:22.757960', 'delta': '0:00:00.048302', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['1fb5a820b9f6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 03:00:25.709511 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2a382ea60872', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 03:00:23.269066', 'end': '2026-03-26 03:00:23.309660', 'delta': '0:00:00.040594', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a382ea60872'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 03:00:32.897201 | orchestrator | 2026-03-26 03:00:32.897301 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 03:00:32.897316 | orchestrator | Thursday 26 March 2026 03:00:25 +0000 (0:00:00.199) 0:00:11.032 ******** 2026-03-26 03:00:32.897348 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:32.897359 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:00:32.897368 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:00:32.897377 | orchestrator | 2026-03-26 03:00:32.897386 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 03:00:32.897395 | orchestrator | Thursday 26 March 2026 03:00:26 +0000 (0:00:00.455) 0:00:11.488 ******** 2026-03-26 03:00:32.897405 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-26 03:00:32.897414 | orchestrator | 2026-03-26 03:00:32.897436 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 03:00:32.897445 | 
orchestrator | Thursday 26 March 2026 03:00:27 +0000 (0:00:01.685) 0:00:13.174 ******** 2026-03-26 03:00:32.897455 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.897466 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.897482 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.897496 | orchestrator | 2026-03-26 03:00:32.897561 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 03:00:32.897577 | orchestrator | Thursday 26 March 2026 03:00:28 +0000 (0:00:00.362) 0:00:13.536 ******** 2026-03-26 03:00:32.897592 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.897606 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.897619 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.897634 | orchestrator | 2026-03-26 03:00:32.897647 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 03:00:32.897661 | orchestrator | Thursday 26 March 2026 03:00:28 +0000 (0:00:00.741) 0:00:14.277 ******** 2026-03-26 03:00:32.897676 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.897691 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.897706 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.897720 | orchestrator | 2026-03-26 03:00:32.897735 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 03:00:32.897750 | orchestrator | Thursday 26 March 2026 03:00:29 +0000 (0:00:00.345) 0:00:14.623 ******** 2026-03-26 03:00:32.897765 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:00:32.897781 | orchestrator | 2026-03-26 03:00:32.897800 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 03:00:32.897816 | orchestrator | Thursday 26 March 2026 03:00:29 +0000 (0:00:00.162) 0:00:14.785 ******** 2026-03-26 03:00:32.897832 | orchestrator | skipping: 
[testbed-node-3] 2026-03-26 03:00:32.897848 | orchestrator | 2026-03-26 03:00:32.897864 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 03:00:32.897879 | orchestrator | Thursday 26 March 2026 03:00:29 +0000 (0:00:00.263) 0:00:15.048 ******** 2026-03-26 03:00:32.897893 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.897910 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.897925 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.897940 | orchestrator | 2026-03-26 03:00:32.897951 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 03:00:32.897960 | orchestrator | Thursday 26 March 2026 03:00:30 +0000 (0:00:00.378) 0:00:15.427 ******** 2026-03-26 03:00:32.897969 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.897978 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.898011 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.898076 | orchestrator | 2026-03-26 03:00:32.898085 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 03:00:32.898094 | orchestrator | Thursday 26 March 2026 03:00:30 +0000 (0:00:00.601) 0:00:16.028 ******** 2026-03-26 03:00:32.898103 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.898112 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.898121 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.898130 | orchestrator | 2026-03-26 03:00:32.898171 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 03:00:32.898182 | orchestrator | Thursday 26 March 2026 03:00:31 +0000 (0:00:00.353) 0:00:16.381 ******** 2026-03-26 03:00:32.898213 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.898228 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.898242 | orchestrator | skipping: 
[testbed-node-5] 2026-03-26 03:00:32.898258 | orchestrator | 2026-03-26 03:00:32.898272 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 03:00:32.898288 | orchestrator | Thursday 26 March 2026 03:00:31 +0000 (0:00:00.370) 0:00:16.752 ******** 2026-03-26 03:00:32.898303 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.898319 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.898334 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.898348 | orchestrator | 2026-03-26 03:00:32.898361 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 03:00:32.898370 | orchestrator | Thursday 26 March 2026 03:00:31 +0000 (0:00:00.344) 0:00:17.096 ******** 2026-03-26 03:00:32.898379 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.898390 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.898405 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.898420 | orchestrator | 2026-03-26 03:00:32.898434 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 03:00:32.898449 | orchestrator | Thursday 26 March 2026 03:00:32 +0000 (0:00:00.537) 0:00:17.633 ******** 2026-03-26 03:00:32.898463 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:32.898477 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:32.898491 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:32.898505 | orchestrator | 2026-03-26 03:00:32.898520 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 03:00:32.898535 | orchestrator | Thursday 26 March 2026 03:00:32 +0000 (0:00:00.358) 0:00:17.992 ******** 2026-03-26 03:00:32.898581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898694 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:32.898747 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.016886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.017190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.017222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.017256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.017278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.017292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.017312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.017328 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.017339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-26 03:00:33.017352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.017363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.017381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.239175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.239278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.239291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.239328 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.239362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.239384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.239396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.239415 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:33.239503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.239518 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:33.239528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.239540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.239551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.239570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.455546 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.455658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.455730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.455741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.455749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.455756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-26 03:00:33.455791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.455810 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.455820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.455829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.455837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-26 03:00:33.455846 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:00:33.455855 | orchestrator | 2026-03-26 03:00:33.455863 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-26 03:00:33.455873 | orchestrator | Thursday 26 March 2026 03:00:33 +0000 (0:00:00.693) 0:00:18.685 ******** 2026-03-26 03:00:33.455888 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607198 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607206 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607243 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-26 03:00:33.607255 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607260 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.607310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-26 03:00:33.716902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717146 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717265 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717284 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717331 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717351 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:00:33.717370 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717388 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.717423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-26 03:00:33.857095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857195 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857210 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857220 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857289 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857298 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:00:33.857307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857323 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857331 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:33.857362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:36.306743 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:36.306851 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-26 03:00:36.306869 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-26 03:00:36.306978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 03:00:36.307019 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 03:00:36.307031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 03:00:36.307041 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-26-01-38-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-26 03:00:36.307059 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:00:36.307070 | orchestrator |
2026-03-26 03:00:36.307080 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 03:00:36.307091 | orchestrator | Thursday 26 March 2026 03:00:33 +0000 (0:00:00.638) 0:00:19.323 ********
2026-03-26 03:00:36.307100 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:00:36.307110 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:00:36.307118 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:00:36.307127 | orchestrator |
2026-03-26 03:00:36.307136 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 03:00:36.307145 | orchestrator | Thursday 26 March 2026 03:00:34 +0000 (0:00:01.004) 0:00:20.328 ********
2026-03-26 03:00:36.307154 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:00:36.307162 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:00:36.307171 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:00:36.307180 | orchestrator |
2026-03-26 03:00:36.307189 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 03:00:36.307199 | orchestrator | Thursday 26 March 2026 03:00:35 +0000 (0:00:00.323) 0:00:20.652 ********
2026-03-26 03:00:36.307210 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:00:36.307227 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:00:36.307242 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:00:36.307257 | orchestrator |
2026-03-26 03:00:36.307278 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 03:00:36.307295 | orchestrator | Thursday 26 March 2026 03:00:35 +0000 (0:00:00.673) 0:00:21.326 ********
2026-03-26 03:00:36.307310 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:00:36.307324 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:00:36.307339 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:00:36.307353 | orchestrator |
2026-03-26 03:00:36.307368 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 03:00:36.307396 | orchestrator | Thursday 26 March 2026 03:00:36 +0000 (0:00:00.312) 0:00:21.638 ********
2026-03-26 03:01:33.450712 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.450843 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:01:33.450861 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:01:33.450876 | orchestrator |
2026-03-26 03:01:33.450889 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 03:01:33.450901 | orchestrator | Thursday 26 March 2026 03:00:37 +0000 (0:00:00.703) 0:00:22.342 ********
2026-03-26 03:01:33.450913 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.450924 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:01:33.450935 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:01:33.450946 | orchestrator |
2026-03-26 03:01:33.450958 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 03:01:33.450969 | orchestrator | Thursday 26 March 2026 03:00:37 +0000 (0:00:00.372) 0:00:22.715 ********
2026-03-26 03:01:33.450980 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-26 03:01:33.450992 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 03:01:33.451003 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-26 03:01:33.451014 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-26 03:01:33.451025 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 03:01:33.451036 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-26 03:01:33.451091 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-26 03:01:33.451129 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 03:01:33.451141 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-26 03:01:33.451153 | orchestrator |
2026-03-26 03:01:33.451164 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 03:01:33.451175 | orchestrator | Thursday 26 March 2026 03:00:38 +0000 (0:00:01.134) 0:00:23.849 ********
2026-03-26 03:01:33.451187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-26 03:01:33.451198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-26 03:01:33.451210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-26 03:01:33.451222 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.451234 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 03:01:33.451246 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 03:01:33.451259 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 03:01:33.451272 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:01:33.451286 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-26 03:01:33.451298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-26 03:01:33.451311 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-26 03:01:33.451324 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:01:33.451336 | orchestrator |
2026-03-26 03:01:33.451349 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 03:01:33.451362 | orchestrator | Thursday 26 March 2026 03:00:38 +0000 (0:00:00.419) 0:00:24.269 ********
2026-03-26 03:01:33.451376 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 03:01:33.451389 | orchestrator |
2026-03-26 03:01:33.451402 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 03:01:33.451417 | orchestrator | Thursday 26 March 2026 03:00:39 +0000 (0:00:00.802) 0:00:25.072 ********
2026-03-26 03:01:33.451429 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.451443 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:01:33.451455 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:01:33.451467 | orchestrator |
2026-03-26 03:01:33.451479 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 03:01:33.451492 | orchestrator | Thursday 26 March 2026 03:00:40 +0000 (0:00:00.331) 0:00:25.404 ********
2026-03-26 03:01:33.451504 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.451517 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:01:33.451528 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:01:33.451540 | orchestrator |
2026-03-26 03:01:33.451552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 03:01:33.451565 | orchestrator | Thursday 26 March 2026 03:00:40 +0000 (0:00:00.329) 0:00:25.734 ********
2026-03-26 03:01:33.451577 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.451590 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:01:33.451602 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:01:33.451615 | orchestrator |
2026-03-26 03:01:33.451627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 03:01:33.451640 | orchestrator | Thursday 26 March 2026 03:00:40 +0000 (0:00:00.595) 0:00:26.329 ********
2026-03-26 03:01:33.451651 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:01:33.451662 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:01:33.451674 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:01:33.451684 | orchestrator |
2026-03-26 03:01:33.451695 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 03:01:33.451706 | orchestrator | Thursday 26 March 2026 03:00:41 +0000 (0:00:00.440) 0:00:26.769 ********
2026-03-26 03:01:33.451717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 03:01:33.451737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 03:01:33.451763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 03:01:33.451774 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.451786 | orchestrator |
2026-03-26 03:01:33.451796 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 03:01:33.451807 | orchestrator | Thursday 26 March 2026 03:00:41 +0000 (0:00:00.407) 0:00:27.176 ********
2026-03-26 03:01:33.451818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 03:01:33.451830 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 03:01:33.451858 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 03:01:33.451870 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.451881 | orchestrator |
2026-03-26 03:01:33.451892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 03:01:33.451903 | orchestrator | Thursday 26 March 2026 03:00:42 +0000 (0:00:00.375) 0:00:27.552 ********
2026-03-26 03:01:33.451913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 03:01:33.451924 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 03:01:33.451935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 03:01:33.451946 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.451957 | orchestrator |
2026-03-26 03:01:33.451968 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 03:01:33.451978 | orchestrator | Thursday 26 March 2026 03:00:42 +0000 (0:00:00.426) 0:00:27.978 ********
2026-03-26 03:01:33.451989 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:01:33.452000 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:01:33.452011 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:01:33.452021 | orchestrator |
2026-03-26 03:01:33.452032 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 03:01:33.452058 | orchestrator | Thursday 26 March 2026 03:00:42 +0000 (0:00:00.347) 0:00:28.326 ********
2026-03-26 03:01:33.452069 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 03:01:33.452081 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-26 03:01:33.452091 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-26 03:01:33.452102 | orchestrator |
2026-03-26 03:01:33.452113 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 03:01:33.452124 | orchestrator | Thursday 26 March 2026 03:00:43 +0000 (0:00:00.839) 0:00:29.166 ********
2026-03-26 03:01:33.452135 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 03:01:33.452148 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 03:01:33.452159 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 03:01:33.452169 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 03:01:33.452180 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 03:01:33.452191 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 03:01:33.452202 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 03:01:33.452213 | orchestrator |
2026-03-26 03:01:33.452223 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 03:01:33.452234 | orchestrator | Thursday 26 March 2026 03:00:44 +0000 (0:00:00.872) 0:00:30.038 ********
2026-03-26 03:01:33.452245 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 03:01:33.452256 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 03:01:33.452266 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 03:01:33.452277 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 03:01:33.452296 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 03:01:33.452307 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 03:01:33.452318 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 03:01:33.452329 | orchestrator |
2026-03-26 03:01:33.452340 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-26 03:01:33.452350 | orchestrator | Thursday 26 March 2026 03:00:46 +0000 (0:00:01.762) 0:00:31.801 ********
2026-03-26 03:01:33.452361 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:01:33.452372 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:01:33.452383 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-26 03:01:33.452394 | orchestrator |
2026-03-26 03:01:33.452404 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-26 03:01:33.452416 | orchestrator | Thursday 26 March 2026 03:00:47 +0000 (0:00:00.617) 0:00:32.418 ********
2026-03-26 03:01:33.452429 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-26 03:01:33.452443 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-26 03:01:33.452460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-26 03:01:33.452481 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-26 03:02:26.571144 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-26 03:02:26.571242 | orchestrator |
2026-03-26 03:02:26.571252 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-26 03:02:26.571258 | orchestrator | Thursday 26 March 2026 03:01:33 +0000 (0:00:46.350) 0:01:18.768 ********
2026-03-26 03:02:26.571263 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571272 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571281 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571284 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571289 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-26 03:02:26.571293 | orchestrator |
2026-03-26 03:02:26.571297 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-26 03:02:26.571301 | orchestrator | Thursday 26 March 2026 03:01:57 +0000 (0:00:23.918) 0:01:42.687 ********
2026-03-26 03:02:26.571305 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571326 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571331 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571334 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571338 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571342 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571347 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-26 03:02:26.571351 | orchestrator |
2026-03-26 03:02:26.571355 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-26 03:02:26.571359 | orchestrator | Thursday 26 March 2026 03:02:08 +0000 (0:00:11.575) 0:01:54.262 ********
2026-03-26 03:02:26.571363 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571367 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-26 03:02:26.571371 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-26 03:02:26.571375 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571379 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-26 03:02:26.571383 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-26 03:02:26.571387 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571391 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-26 03:02:26.571395 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-26 03:02:26.571399 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571403 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-26 03:02:26.571419 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-26 03:02:26.571423 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 03:02:26.571427 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-26 03:02:26.571431 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-26 03:02:26.571435 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-26 03:02:26.571440 | orchestrator |
2026-03-26 03:02:26.571444 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:02:26.571459 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-26 03:02:26.571465 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-26 03:02:26.571469 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-26 03:02:26.571474 | orchestrator |
2026-03-26 03:02:26.571478 | orchestrator |
2026-03-26 03:02:26.571481 | orchestrator |
2026-03-26 03:02:26.571496 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:02:26.571500 | orchestrator | Thursday 26 March 2026 03:02:26 +0000 (0:00:17.220) 0:02:11.483 ********
2026-03-26 03:02:26.571504 | orchestrator | ===============================================================================
2026-03-26 03:02:26.571514 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.35s
2026-03-26 03:02:26.571519 | orchestrator | generate keys ---------------------------------------------------------- 23.92s
2026-03-26 03:02:26.571523 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.22s
2026-03-26 03:02:26.571530 | orchestrator | get keys from monitors ------------------------------------------------- 11.58s
2026-03-26 03:02:26.571536 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.27s
2026-03-26 03:02:26.571543 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.76s
2026-03-26 03:02:26.571550 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s
2026-03-26 03:02:26.571556 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.17s
2026-03-26 03:02:26.571562 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.13s
2026-03-26 03:02:26.571568 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 1.00s
2026-03-26 03:02:26.571574 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.95s
2026-03-26 03:02:26.571581 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.87s
2026-03-26 03:02:26.571588 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.84s
2026-03-26 03:02:26.571595 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.80s
2026-03-26 03:02:26.571601 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.74s
2026-03-26 03:02:26.571608 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.73s
2026-03-26 03:02:26.571615 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s
2026-03-26 03:02:26.571620 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.70s
2026-03-26 03:02:26.571624 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.69s
2026-03-26 03:02:26.571628 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s
2026-03-26 03:02:29.062592 | orchestrator | 2026-03-26 03:02:29 | INFO  | Task ba06c60f-34f0-4428-a23d-714319bfb192 (copy-ceph-keys) was prepared for execution.
2026-03-26 03:02:29.062719 | orchestrator | 2026-03-26 03:02:29 | INFO  | It takes a moment until task ba06c60f-34f0-4428-a23d-714319bfb192 (copy-ceph-keys) has been started and output is visible here.
2026-03-26 03:03:10.158717 | orchestrator |
2026-03-26 03:03:10.158837 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-26 03:03:10.158854 | orchestrator |
2026-03-26 03:03:10.158866 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-26 03:03:10.158877 | orchestrator | Thursday 26 March 2026 03:02:33 +0000 (0:00:00.164) 0:00:00.164 ********
2026-03-26 03:03:10.158889 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-26 03:03:10.158902 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.158913 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.158923 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-26 03:03:10.158935 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.158946 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-26 03:03:10.158957 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-26 03:03:10.158967 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-26 03:03:10.158999 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-26 03:03:10.159011 | orchestrator |
2026-03-26 03:03:10.159022 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-26 03:03:10.159033 | orchestrator | Thursday 26 March 2026 03:02:38 +0000 (0:00:04.691) 0:00:04.855 ********
2026-03-26 03:03:10.159044 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-26 03:03:10.159070 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159082 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159092 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-26 03:03:10.159103 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159177 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-26 03:03:10.159190 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-26 03:03:10.159201 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-26 03:03:10.159212 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-26 03:03:10.159223 | orchestrator |
2026-03-26 03:03:10.159234 | orchestrator | TASK [Create share directory] **************************************************
2026-03-26 03:03:10.159246 | orchestrator | Thursday 26 March 2026 03:02:42 +0000 (0:00:04.424) 0:00:09.280 ********
2026-03-26 03:03:10.159261 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-26 03:03:10.159274 | orchestrator |
2026-03-26 03:03:10.159287 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-26 03:03:10.159300 | orchestrator | Thursday 26 March 2026 03:02:44 +0000 (0:00:01.072) 0:00:10.353 ********
2026-03-26 03:03:10.159314 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-26 03:03:10.159328 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159341 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159355 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-26 03:03:10.159368 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159381 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-26 03:03:10.159394 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-26 03:03:10.159407 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-26 03:03:10.159420 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-26 03:03:10.159433 | orchestrator |
2026-03-26 03:03:10.159445 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-26 03:03:10.159459 | orchestrator | Thursday 26 March 2026 03:02:58 +0000 (0:00:14.271) 0:00:24.624 ********
2026-03-26 03:03:10.159472 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-26 03:03:10.159484 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-26 03:03:10.159495 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-26 03:03:10.159506 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-26 03:03:10.159536 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-26 03:03:10.159557 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-26 03:03:10.159568 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-26 03:03:10.159579 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-26 03:03:10.159590 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-26 03:03:10.159601 | orchestrator |
2026-03-26 03:03:10.159612 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-26 03:03:10.159623 | orchestrator | Thursday 26 March 2026 03:03:02 +0000 (0:00:04.262) 0:00:28.887 ********
2026-03-26 03:03:10.159635 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-26 03:03:10.159647 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159658 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159669 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-26 03:03:10.159680 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-26 03:03:10.159691 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-26 03:03:10.159703 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-26 03:03:10.159713 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-26 03:03:10.159724 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-26 03:03:10.159735 | orchestrator |
2026-03-26 03:03:10.159747 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:03:10.159764 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:03:10.159776 | orchestrator |
2026-03-26 03:03:10.159787 | orchestrator |
2026-03-26 03:03:10.159798 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:03:10.159809 | orchestrator | Thursday 26 March 2026 03:03:09 +0000 (0:00:07.279) 0:00:36.166 ********
2026-03-26 03:03:10.159820 | orchestrator | ===============================================================================
2026-03-26 03:03:10.159832 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.27s
2026-03-26 03:03:10.159843 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.28s
2026-03-26 03:03:10.159854 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.69s
2026-03-26 03:03:10.159865 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.42s
2026-03-26 03:03:10.159876 | orchestrator | Check if target directories exist --------------------------------------- 4.26s
2026-03-26 03:03:10.159887 | orchestrator | Create share directory -------------------------------------------------- 1.07s
2026-03-26 03:03:22.672364 | orchestrator | 2026-03-26 03:03:22 | INFO  | Task 3617dc68-3c70-4fe1-b924-b0629be23217 (cephclient) was prepared for execution.
2026-03-26 03:03:22.672462 | orchestrator | 2026-03-26 03:03:22 | INFO  | It takes a moment until task 3617dc68-3c70-4fe1-b924-b0629be23217 (cephclient) has been started and output is visible here. 2026-03-26 03:04:26.054275 | orchestrator | 2026-03-26 03:04:26.054419 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-26 03:04:26.054447 | orchestrator | 2026-03-26 03:04:26.054468 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-26 03:04:26.054486 | orchestrator | Thursday 26 March 2026 03:03:27 +0000 (0:00:00.256) 0:00:00.256 ******** 2026-03-26 03:04:26.054506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-26 03:04:26.054557 | orchestrator | 2026-03-26 03:04:26.054578 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-26 03:04:26.054599 | orchestrator | Thursday 26 March 2026 03:03:27 +0000 (0:00:00.265) 0:00:00.522 ******** 2026-03-26 03:04:26.054620 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-26 03:04:26.054640 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-26 03:04:26.054662 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-26 03:04:26.054682 | orchestrator | 2026-03-26 03:04:26.054701 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-26 03:04:26.054719 | orchestrator | Thursday 26 March 2026 03:03:28 +0000 (0:00:01.247) 0:00:01.769 ******** 2026-03-26 03:04:26.054739 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-26 03:04:26.054758 | orchestrator | 2026-03-26 03:04:26.054777 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-03-26 03:04:26.054797 | orchestrator | Thursday 26 March 2026 03:03:30 +0000 (0:00:01.495) 0:00:03.265 ******** 2026-03-26 03:04:26.054816 | orchestrator | changed: [testbed-manager] 2026-03-26 03:04:26.054835 | orchestrator | 2026-03-26 03:04:26.054847 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-26 03:04:26.054858 | orchestrator | Thursday 26 March 2026 03:03:31 +0000 (0:00:00.984) 0:00:04.249 ******** 2026-03-26 03:04:26.054870 | orchestrator | changed: [testbed-manager] 2026-03-26 03:04:26.054880 | orchestrator | 2026-03-26 03:04:26.054892 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-26 03:04:26.054903 | orchestrator | Thursday 26 March 2026 03:03:32 +0000 (0:00:00.973) 0:00:05.223 ******** 2026-03-26 03:04:26.054914 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-03-26 03:04:26.054925 | orchestrator | ok: [testbed-manager] 2026-03-26 03:04:26.054937 | orchestrator | 2026-03-26 03:04:26.054948 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-26 03:04:26.054959 | orchestrator | Thursday 26 March 2026 03:04:15 +0000 (0:00:43.450) 0:00:48.673 ******** 2026-03-26 03:04:26.054970 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-26 03:04:26.054982 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-26 03:04:26.054993 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-26 03:04:26.055004 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-26 03:04:26.055015 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-26 03:04:26.055027 | orchestrator | 2026-03-26 03:04:26.055039 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-26 03:04:26.055050 | 
orchestrator | Thursday 26 March 2026 03:04:19 +0000 (0:00:04.299) 0:00:52.973 ******** 2026-03-26 03:04:26.055061 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-26 03:04:26.055072 | orchestrator | 2026-03-26 03:04:26.055083 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-26 03:04:26.055094 | orchestrator | Thursday 26 March 2026 03:04:20 +0000 (0:00:00.500) 0:00:53.473 ******** 2026-03-26 03:04:26.055105 | orchestrator | skipping: [testbed-manager] 2026-03-26 03:04:26.055116 | orchestrator | 2026-03-26 03:04:26.055127 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-26 03:04:26.055138 | orchestrator | Thursday 26 March 2026 03:04:20 +0000 (0:00:00.158) 0:00:53.631 ******** 2026-03-26 03:04:26.055149 | orchestrator | skipping: [testbed-manager] 2026-03-26 03:04:26.055160 | orchestrator | 2026-03-26 03:04:26.055199 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-03-26 03:04:26.055213 | orchestrator | Thursday 26 March 2026 03:04:21 +0000 (0:00:00.574) 0:00:54.205 ******** 2026-03-26 03:04:26.055240 | orchestrator | changed: [testbed-manager] 2026-03-26 03:04:26.055252 | orchestrator | 2026-03-26 03:04:26.055263 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-26 03:04:26.055288 | orchestrator | Thursday 26 March 2026 03:04:22 +0000 (0:00:01.680) 0:00:55.885 ******** 2026-03-26 03:04:26.055299 | orchestrator | changed: [testbed-manager] 2026-03-26 03:04:26.055310 | orchestrator | 2026-03-26 03:04:26.055321 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-26 03:04:26.055332 | orchestrator | Thursday 26 March 2026 03:04:23 +0000 (0:00:00.705) 0:00:56.591 ******** 2026-03-26 03:04:26.055342 | orchestrator | changed: [testbed-manager] 2026-03-26 03:04:26.055353 | 
orchestrator | 2026-03-26 03:04:26.055364 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-26 03:04:26.055375 | orchestrator | Thursday 26 March 2026 03:04:24 +0000 (0:00:00.622) 0:00:57.214 ******** 2026-03-26 03:04:26.055386 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-26 03:04:26.055397 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-26 03:04:26.055408 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-26 03:04:26.055418 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-26 03:04:26.055429 | orchestrator | 2026-03-26 03:04:26.055441 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:04:26.055452 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 03:04:26.055465 | orchestrator | 2026-03-26 03:04:26.055476 | orchestrator | 2026-03-26 03:04:26.055509 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:04:26.055521 | orchestrator | Thursday 26 March 2026 03:04:25 +0000 (0:00:01.594) 0:00:58.809 ******** 2026-03-26 03:04:26.055532 | orchestrator | =============================================================================== 2026-03-26 03:04:26.055543 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.45s 2026-03-26 03:04:26.055554 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.30s 2026-03-26 03:04:26.055565 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.68s 2026-03-26 03:04:26.055576 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.59s 2026-03-26 03:04:26.055587 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.50s 2026-03-26 03:04:26.055598 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2026-03-26 03:04:26.055609 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.98s 2026-03-26 03:04:26.055620 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.97s 2026-03-26 03:04:26.055631 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s 2026-03-26 03:04:26.055641 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s 2026-03-26 03:04:26.055652 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.57s 2026-03-26 03:04:26.055663 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.50s 2026-03-26 03:04:26.055674 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.27s 2026-03-26 03:04:26.055685 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-03-26 03:04:28.680583 | orchestrator | 2026-03-26 03:04:28 | INFO  | Task ccda482a-8362-4afe-bbda-d062fa374a95 (ceph-bootstrap-dashboard) was prepared for execution. 2026-03-26 03:04:28.680805 | orchestrator | 2026-03-26 03:04:28 | INFO  | It takes a moment until task ccda482a-8362-4afe-bbda-d062fa374a95 (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-03-26 03:05:50.454822 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-26 03:05:50.454943 | orchestrator | 2.16.14 2026-03-26 03:05:50.454961 | orchestrator | 2026-03-26 03:05:50.454975 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-03-26 03:05:50.454988 | orchestrator | 2026-03-26 03:05:50.454999 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-26 03:05:50.455036 | orchestrator | Thursday 26 March 2026 03:04:33 +0000 (0:00:00.273) 0:00:00.273 ******** 2026-03-26 03:05:50.455048 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455060 | orchestrator | 2026-03-26 03:05:50.455072 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-26 03:05:50.455082 | orchestrator | Thursday 26 March 2026 03:04:35 +0000 (0:00:01.841) 0:00:02.115 ******** 2026-03-26 03:05:50.455093 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455104 | orchestrator | 2026-03-26 03:05:50.455115 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-26 03:05:50.455126 | orchestrator | Thursday 26 March 2026 03:04:36 +0000 (0:00:01.161) 0:00:03.276 ******** 2026-03-26 03:05:50.455137 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455148 | orchestrator | 2026-03-26 03:05:50.455158 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-26 03:05:50.455169 | orchestrator | Thursday 26 March 2026 03:04:37 +0000 (0:00:01.144) 0:00:04.420 ******** 2026-03-26 03:05:50.455180 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455191 | orchestrator | 2026-03-26 03:05:50.455201 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-26 03:05:50.455212 | orchestrator | Thursday 26 March 
2026 03:04:38 +0000 (0:00:01.271) 0:00:05.692 ******** 2026-03-26 03:05:50.455277 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455296 | orchestrator | 2026-03-26 03:05:50.455315 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-26 03:05:50.455333 | orchestrator | Thursday 26 March 2026 03:04:40 +0000 (0:00:01.123) 0:00:06.816 ******** 2026-03-26 03:05:50.455372 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455391 | orchestrator | 2026-03-26 03:05:50.455409 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-26 03:05:50.455425 | orchestrator | Thursday 26 March 2026 03:04:41 +0000 (0:00:01.124) 0:00:07.941 ******** 2026-03-26 03:05:50.455443 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455462 | orchestrator | 2026-03-26 03:05:50.455481 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-26 03:05:50.455500 | orchestrator | Thursday 26 March 2026 03:04:43 +0000 (0:00:02.107) 0:00:10.048 ******** 2026-03-26 03:05:50.455519 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455539 | orchestrator | 2026-03-26 03:05:50.455558 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-26 03:05:50.455576 | orchestrator | Thursday 26 March 2026 03:04:44 +0000 (0:00:01.337) 0:00:11.386 ******** 2026-03-26 03:05:50.455587 | orchestrator | changed: [testbed-manager] 2026-03-26 03:05:50.455598 | orchestrator | 2026-03-26 03:05:50.455609 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-26 03:05:50.455620 | orchestrator | Thursday 26 March 2026 03:05:25 +0000 (0:00:40.591) 0:00:51.978 ******** 2026-03-26 03:05:50.455631 | orchestrator | skipping: [testbed-manager] 2026-03-26 03:05:50.455642 | orchestrator | 2026-03-26 03:05:50.455653 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-03-26 03:05:50.455663 | orchestrator | 2026-03-26 03:05:50.455674 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-26 03:05:50.455685 | orchestrator | Thursday 26 March 2026 03:05:25 +0000 (0:00:00.170) 0:00:52.149 ******** 2026-03-26 03:05:50.455696 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:05:50.455707 | orchestrator | 2026-03-26 03:05:50.455718 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-26 03:05:50.455729 | orchestrator | 2026-03-26 03:05:50.455740 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-26 03:05:50.455750 | orchestrator | Thursday 26 March 2026 03:05:37 +0000 (0:00:12.040) 0:01:04.189 ******** 2026-03-26 03:05:50.455761 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:05:50.455772 | orchestrator | 2026-03-26 03:05:50.455783 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-26 03:05:50.455805 | orchestrator | 2026-03-26 03:05:50.455816 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-26 03:05:50.455826 | orchestrator | Thursday 26 March 2026 03:05:38 +0000 (0:00:01.252) 0:01:05.441 ******** 2026-03-26 03:05:50.455838 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:05:50.455849 | orchestrator | 2026-03-26 03:05:50.455860 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:05:50.455872 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 03:05:50.455884 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 03:05:50.455896 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 03:05:50.455907 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 03:05:50.455918 | orchestrator | 2026-03-26 03:05:50.455929 | orchestrator | 2026-03-26 03:05:50.455939 | orchestrator | 2026-03-26 03:05:50.455950 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:05:50.455961 | orchestrator | Thursday 26 March 2026 03:05:50 +0000 (0:00:11.365) 0:01:16.807 ******** 2026-03-26 03:05:50.455972 | orchestrator | =============================================================================== 2026-03-26 03:05:50.455983 | orchestrator | Create admin user ------------------------------------------------------ 40.59s 2026-03-26 03:05:50.456014 | orchestrator | Restart ceph manager service ------------------------------------------- 24.66s 2026-03-26 03:05:50.456026 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.11s 2026-03-26 03:05:50.456037 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.84s 2026-03-26 03:05:50.456048 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.34s 2026-03-26 03:05:50.456059 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.27s 2026-03-26 03:05:50.456069 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.16s 2026-03-26 03:05:50.456080 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s 2026-03-26 03:05:50.456091 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.12s 2026-03-26 03:05:50.456102 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.12s 2026-03-26 03:05:50.456113 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.17s 2026-03-26 03:05:50.811439 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-03-26 03:05:52.953876 | orchestrator | 2026-03-26 03:05:52 | INFO  | Task 5d3630be-9a4e-48be-9e20-a75775b77781 (keystone) was prepared for execution. 2026-03-26 03:05:52.953977 | orchestrator | 2026-03-26 03:05:52 | INFO  | It takes a moment until task 5d3630be-9a4e-48be-9e20-a75775b77781 (keystone) has been started and output is visible here. 2026-03-26 03:06:00.420124 | orchestrator | 2026-03-26 03:06:00.420282 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:06:00.420311 | orchestrator | 2026-03-26 03:06:00.420327 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:06:00.420362 | orchestrator | Thursday 26 March 2026 03:05:57 +0000 (0:00:00.282) 0:00:00.282 ******** 2026-03-26 03:06:00.420379 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:06:00.420395 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:06:00.420410 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:06:00.420425 | orchestrator | 2026-03-26 03:06:00.420440 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:06:00.420455 | orchestrator | Thursday 26 March 2026 03:05:57 +0000 (0:00:00.343) 0:00:00.625 ******** 2026-03-26 03:06:00.420496 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-26 03:06:00.420512 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-26 03:06:00.420527 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-26 03:06:00.420541 | orchestrator | 2026-03-26 03:06:00.420556 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-26 03:06:00.420572 | orchestrator | 2026-03-26 03:06:00.420586 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-03-26 03:06:00.420601 | orchestrator | Thursday 26 March 2026 03:05:58 +0000 (0:00:00.442) 0:00:01.067 ******** 2026-03-26 03:06:00.420616 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:06:00.420633 | orchestrator | 2026-03-26 03:06:00.420648 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-26 03:06:00.420664 | orchestrator | Thursday 26 March 2026 03:05:58 +0000 (0:00:00.605) 0:00:01.673 ******** 2026-03-26 03:06:00.420687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 03:06:00.420710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 03:06:00.420752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 03:06:00.420773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-26 03:06:00.420784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-26 03:06:00.420794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-26 03:06:00.420803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-26 03:06:00.420812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-26 03:06:00.420822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-26 03:06:00.420836 | orchestrator | 2026-03-26 03:06:00.420846 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-03-26 03:06:00.420861 | orchestrator | Thursday 26 March 2026 03:06:00 +0000 (0:00:01.621) 0:00:03.294 ******** 2026-03-26 03:06:06.635546 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:06.635638 | orchestrator | 2026-03-26 03:06:06.635648 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-26 03:06:06.635671 | orchestrator | Thursday 26 March 2026 03:06:00 +0000 (0:00:00.323) 0:00:03.618 ******** 2026-03-26 03:06:06.635678 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:06.635685 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:06:06.635692 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:06:06.635699 | orchestrator | 2026-03-26 03:06:06.635707 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-26 03:06:06.635713 | orchestrator | Thursday 26 March 2026 03:06:01 +0000 (0:00:00.342) 0:00:03.961 ******** 2026-03-26 03:06:06.635719 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:06:06.635726 | orchestrator | 2026-03-26 03:06:06.635732 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-26 03:06:06.635738 | orchestrator | Thursday 26 March 2026 03:06:01 +0000 (0:00:00.874) 0:00:04.835 ******** 2026-03-26 03:06:06.635745 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:06:06.635752 | orchestrator | 2026-03-26 03:06:06.635758 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-26 03:06:06.635764 | orchestrator | Thursday 26 March 2026 03:06:02 +0000 (0:00:00.623) 0:00:05.459 ******** 2026-03-26 03:06:06.635775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:06.635785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:06.635793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:06.635836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:06.635846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:06.635853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:06.635860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:06.635867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:06.635880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:06.635887 | orchestrator |
2026-03-26 03:06:06.635894 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-03-26 03:06:06.635901 | orchestrator | Thursday 26 March 2026 03:06:06 +0000 (0:00:03.477) 0:00:08.936 ********
2026-03-26 03:06:06.635914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:07.520786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:07.520899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:07.520908 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:06:07.520915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:07.520935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:07.520942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:07.520946 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:06:07.520964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:07.520969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:07.520973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:07.520981 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:06:07.520985 | orchestrator |
2026-03-26 03:06:07.520989 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-03-26 03:06:07.520995 | orchestrator | Thursday 26 March 2026 03:06:06 +0000 (0:00:00.583) 0:00:09.520 ********
2026-03-26 03:06:07.520999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:07.521006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:07.521015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:10.849282 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:06:10.849372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:10.849385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:10.849410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:10.849417 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:06:10.849434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:10.849441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:10.849459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:10.849465 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:06:10.849471 | orchestrator |
2026-03-26 03:06:10.849477 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2026-03-26 03:06:10.849484 | orchestrator | Thursday 26 March 2026 03:06:07 +0000 (0:00:00.888) 0:00:10.408 ********
2026-03-26 03:06:10.849490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:10.849501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:10.849512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:10.849526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:15.565997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:15.566212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:15.566341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:15.566359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:15.566387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:15.566400 | orchestrator |
2026-03-26 03:06:15.566414 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2026-03-26 03:06:15.566427 | orchestrator | Thursday 26 March 2026 03:06:10 +0000 (0:00:03.322) 0:00:13.730 ********
2026-03-26 03:06:15.566461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:15.566477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:15.566502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:15.566517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:15.566537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-26 03:06:15.566560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-26 03:06:19.282900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:19.283000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:19.283009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-26 03:06:19.283016 | orchestrator |
2026-03-26 03:06:19.283024 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-03-26 03:06:19.283032 | orchestrator | Thursday 26 March 2026 03:06:15 +0000 (0:00:04.719) 0:00:18.450 ********
2026-03-26 03:06:19.283038 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:06:19.283045 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:06:19.283051 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:06:19.283057 | orchestrator |
2026-03-26 03:06:19.283063 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-26 03:06:19.283069 | orchestrator | Thursday 26 March 2026 03:06:16 +0000 (0:00:01.429) 0:00:19.880 ******** 2026-03-26 03:06:19.283075 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:19.283080 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:06:19.283086 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:06:19.283092 | orchestrator | 2026-03-26 03:06:19.283098 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-26 03:06:19.283104 | orchestrator | Thursday 26 March 2026 03:06:17 +0000 (0:00:00.806) 0:00:20.686 ******** 2026-03-26 03:06:19.283110 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:19.283116 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:06:19.283122 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:06:19.283127 | orchestrator | 2026-03-26 03:06:19.283146 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-26 03:06:19.283152 | orchestrator | Thursday 26 March 2026 03:06:18 +0000 (0:00:00.569) 0:00:21.256 ******** 2026-03-26 03:06:19.283158 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:19.283164 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:06:19.283170 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:06:19.283176 | orchestrator | 2026-03-26 03:06:19.283182 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-26 03:06:19.283188 | orchestrator | Thursday 26 March 2026 03:06:18 +0000 (0:00:00.323) 0:00:21.579 ******** 2026-03-26 03:06:19.283209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-26 03:06:19.283223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 03:06:19.283268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 03:06:19.283276 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:19.283282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-26 03:06:19.283293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 03:06:19.283299 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 03:06:19.283313 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:06:19.283325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-26 03:06:38.744302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 03:06:38.744433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 03:06:38.744454 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:06:38.744470 | orchestrator | 2026-03-26 03:06:38.744484 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-26 03:06:38.744501 | orchestrator | Thursday 26 March 2026 03:06:19 +0000 (0:00:00.588) 0:00:22.168 ******** 2026-03-26 03:06:38.744515 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:38.744529 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:06:38.744542 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:06:38.744556 | orchestrator | 2026-03-26 03:06:38.744567 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-26 03:06:38.744576 | orchestrator | Thursday 26 March 2026 03:06:19 +0000 (0:00:00.309) 0:00:22.477 ******** 2026-03-26 03:06:38.744584 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-26 03:06:38.744593 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-26 03:06:38.744625 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-26 03:06:38.744633 | orchestrator | 2026-03-26 03:06:38.744654 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-26 03:06:38.744663 | orchestrator | Thursday 26 March 2026 03:06:21 +0000 (0:00:01.859) 0:00:24.336 ******** 2026-03-26 03:06:38.744671 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:06:38.744679 | orchestrator | 2026-03-26 03:06:38.744687 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-26 03:06:38.744695 | orchestrator | Thursday 26 March 2026 03:06:22 +0000 (0:00:00.953) 0:00:25.290 ******** 2026-03-26 03:06:38.744702 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:06:38.744710 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:06:38.744718 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:06:38.744726 | orchestrator | 2026-03-26 03:06:38.744734 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-26 03:06:38.744742 | orchestrator | Thursday 26 March 2026 03:06:22 +0000 (0:00:00.601) 0:00:25.891 ******** 2026-03-26 03:06:38.744750 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:06:38.744758 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-26 03:06:38.744766 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-26 03:06:38.744774 | orchestrator | 2026-03-26 03:06:38.744781 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-26 03:06:38.744806 | orchestrator | Thursday 26 March 2026 03:06:24 +0000 (0:00:01.090) 
0:00:26.982 ******** 2026-03-26 03:06:38.744824 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:06:38.744834 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:06:38.744843 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:06:38.744852 | orchestrator | 2026-03-26 03:06:38.744862 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-26 03:06:38.744871 | orchestrator | Thursday 26 March 2026 03:06:24 +0000 (0:00:00.565) 0:00:27.547 ******** 2026-03-26 03:06:38.744880 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-26 03:06:38.744890 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-26 03:06:38.744900 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-26 03:06:38.744910 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-26 03:06:38.744919 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-26 03:06:38.744933 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-26 03:06:38.744952 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-26 03:06:38.744971 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-26 03:06:38.745004 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-26 03:06:38.745020 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-26 03:06:38.745032 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-26 
03:06:38.745045 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-26 03:06:38.745056 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-26 03:06:38.745067 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-26 03:06:38.745079 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-26 03:06:38.745093 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-26 03:06:38.745117 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-26 03:06:38.745130 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-26 03:06:38.745143 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-26 03:06:38.745157 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-26 03:06:38.745185 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-26 03:06:38.745194 | orchestrator | 2026-03-26 03:06:38.745211 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-26 03:06:38.745220 | orchestrator | Thursday 26 March 2026 03:06:33 +0000 (0:00:09.154) 0:00:36.701 ******** 2026-03-26 03:06:38.745227 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-26 03:06:38.745235 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-26 03:06:38.745300 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-26 03:06:38.745322 
| orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-26 03:06:38.745331 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-26 03:06:38.745338 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-26 03:06:38.745346 | orchestrator | 2026-03-26 03:06:38.745354 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-26 03:06:38.745369 | orchestrator | Thursday 26 March 2026 03:06:36 +0000 (0:00:02.672) 0:00:39.373 ******** 2026-03-26 03:06:38.745381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 03:06:38.745400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 03:08:19.954829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-26 03:08:19.954966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-26 03:08:19.955002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-26 03:08:19.955015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-26 03:08:19.955027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-26 03:08:19.955049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-26 03:08:19.955062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-26 03:08:19.955068 | orchestrator | 2026-03-26 03:08:19.955139 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-03-26 03:08:19.955156 | orchestrator | Thursday 26 March 2026 03:06:38 +0000 (0:00:02.245) 0:00:41.619 ******** 2026-03-26 03:08:19.955182 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:08:19.955201 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:08:19.955211 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:08:19.955222 | orchestrator | 2026-03-26 03:08:19.955231 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-26 03:08:19.955242 | orchestrator | Thursday 26 March 2026 03:06:39 +0000 (0:00:00.588) 0:00:42.207 ******** 2026-03-26 03:08:19.955248 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:08:19.955254 | orchestrator | 2026-03-26 03:08:19.955260 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-26 03:08:19.955266 | orchestrator | Thursday 26 March 2026 03:06:41 +0000 (0:00:02.362) 0:00:44.570 ******** 2026-03-26 03:08:19.955272 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:08:19.955278 | orchestrator | 2026-03-26 03:08:19.955300 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-26 03:08:19.955307 | orchestrator | Thursday 26 March 2026 03:06:43 +0000 (0:00:02.231) 0:00:46.802 ******** 2026-03-26 03:08:19.955313 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:08:19.955319 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:08:19.955325 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:08:19.955331 | orchestrator | 2026-03-26 03:08:19.955337 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-26 03:08:19.955343 | orchestrator | Thursday 26 March 2026 03:06:44 +0000 (0:00:00.871) 0:00:47.673 ******** 2026-03-26 03:08:19.955349 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:08:19.955357 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:08:19.955364 | orchestrator | ok: 
[testbed-node-2] 2026-03-26 03:08:19.955370 | orchestrator | 2026-03-26 03:08:19.955377 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-26 03:08:19.955392 | orchestrator | Thursday 26 March 2026 03:06:45 +0000 (0:00:00.388) 0:00:48.062 ******** 2026-03-26 03:08:19.955400 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:08:19.955407 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:08:19.955415 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:08:19.955421 | orchestrator | 2026-03-26 03:08:19.955428 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-26 03:08:19.955435 | orchestrator | Thursday 26 March 2026 03:06:45 +0000 (0:00:00.582) 0:00:48.644 ******** 2026-03-26 03:08:19.955442 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:08:19.955449 | orchestrator | 2026-03-26 03:08:19.955456 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-03-26 03:08:19.955463 | orchestrator | Thursday 26 March 2026 03:07:00 +0000 (0:00:15.232) 0:01:03.876 ******** 2026-03-26 03:08:19.955470 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:08:19.955477 | orchestrator | 2026-03-26 03:08:19.955484 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-26 03:08:19.955491 | orchestrator | Thursday 26 March 2026 03:07:12 +0000 (0:00:11.133) 0:01:15.010 ******** 2026-03-26 03:08:19.955505 | orchestrator | 2026-03-26 03:08:19.955512 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-26 03:08:19.955525 | orchestrator | Thursday 26 March 2026 03:07:12 +0000 (0:00:00.085) 0:01:15.096 ******** 2026-03-26 03:08:19.955538 | orchestrator | 2026-03-26 03:08:19.955547 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-26 
03:08:19.955556 | orchestrator | Thursday 26 March 2026 03:07:12 +0000 (0:00:00.072) 0:01:15.169 ******** 2026-03-26 03:08:19.955565 | orchestrator | 2026-03-26 03:08:19.955574 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-26 03:08:19.955583 | orchestrator | Thursday 26 March 2026 03:07:12 +0000 (0:00:00.072) 0:01:15.241 ******** 2026-03-26 03:08:19.955591 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:08:19.955600 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:08:19.955610 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:08:19.955620 | orchestrator | 2026-03-26 03:08:19.955630 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-26 03:08:19.955639 | orchestrator | Thursday 26 March 2026 03:08:01 +0000 (0:00:49.106) 0:02:04.348 ******** 2026-03-26 03:08:19.955649 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:08:19.955658 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:08:19.955667 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:08:19.955676 | orchestrator | 2026-03-26 03:08:19.955682 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-26 03:08:19.955688 | orchestrator | Thursday 26 March 2026 03:08:11 +0000 (0:00:10.443) 0:02:14.791 ******** 2026-03-26 03:08:19.955694 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:08:19.955700 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:08:19.955706 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:08:19.955711 | orchestrator | 2026-03-26 03:08:19.955717 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-26 03:08:19.955723 | orchestrator | Thursday 26 March 2026 03:08:19 +0000 (0:00:07.404) 0:02:22.195 ******** 2026-03-26 03:08:19.955737 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:09:11.219610 | orchestrator |
2026-03-26 03:09:11.219741 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-26 03:09:11.219760 | orchestrator | Thursday 26 March 2026 03:08:19 +0000 (0:00:00.644) 0:02:22.840 ********
2026-03-26 03:09:11.219773 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:09:11.219786 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:09:11.219798 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:09:11.219809 | orchestrator |
2026-03-26 03:09:11.219820 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-26 03:09:11.219832 | orchestrator | Thursday 26 March 2026 03:08:21 +0000 (0:00:01.178) 0:02:24.018 ********
2026-03-26 03:09:11.219843 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:09:11.219855 | orchestrator |
2026-03-26 03:09:11.219866 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-26 03:09:11.219877 | orchestrator | Thursday 26 March 2026 03:08:22 +0000 (0:00:01.778) 0:02:25.797 ********
2026-03-26 03:09:11.219888 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-26 03:09:11.219899 | orchestrator |
2026-03-26 03:09:11.219910 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-26 03:09:11.219921 | orchestrator | Thursday 26 March 2026 03:08:34 +0000 (0:00:11.928) 0:02:37.725 ********
2026-03-26 03:09:11.219933 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-26 03:09:11.219943 | orchestrator |
2026-03-26 03:09:11.219954 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-26 03:09:11.219965 | orchestrator | Thursday 26 March 2026 03:08:59 +0000 (0:00:24.419) 0:03:02.145 ********
2026-03-26 03:09:11.219976 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-26 03:09:11.220027 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-26 03:09:11.220040 | orchestrator |
2026-03-26 03:09:11.220051 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-26 03:09:11.220063 | orchestrator | Thursday 26 March 2026 03:09:06 +0000 (0:00:06.757) 0:03:08.902 ********
2026-03-26 03:09:11.220073 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:11.220085 | orchestrator |
2026-03-26 03:09:11.220095 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-26 03:09:11.220107 | orchestrator | Thursday 26 March 2026 03:09:06 +0000 (0:00:00.134) 0:03:09.036 ********
2026-03-26 03:09:11.220120 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:11.220133 | orchestrator |
2026-03-26 03:09:11.220146 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-26 03:09:11.220158 | orchestrator | Thursday 26 March 2026 03:09:06 +0000 (0:00:00.137) 0:03:09.174 ********
2026-03-26 03:09:11.220171 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:11.220185 | orchestrator |
2026-03-26 03:09:11.220212 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-26 03:09:11.220226 | orchestrator | Thursday 26 March 2026 03:09:06 +0000 (0:00:00.139) 0:03:09.313 ********
2026-03-26 03:09:11.220239 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:11.220252 | orchestrator |
2026-03-26 03:09:11.220264 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-26 03:09:11.220275 | orchestrator | Thursday 26 March 2026 03:09:06 +0000 (0:00:00.557) 0:03:09.871 ********
2026-03-26 03:09:11.220286 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:09:11.220297 | orchestrator |
2026-03-26 03:09:11.220331 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-26 03:09:11.220343 | orchestrator | Thursday 26 March 2026 03:09:10 +0000 (0:00:03.281) 0:03:13.153 ********
2026-03-26 03:09:11.220354 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:11.220365 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:09:11.220376 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:09:11.220387 | orchestrator |
2026-03-26 03:09:11.220398 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:09:11.220410 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-26 03:09:11.220423 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-26 03:09:11.220434 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-26 03:09:11.220445 | orchestrator |
2026-03-26 03:09:11.220456 | orchestrator |
2026-03-26 03:09:11.220467 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:09:11.220478 | orchestrator | Thursday 26 March 2026 03:09:10 +0000 (0:00:00.512) 0:03:13.665 ********
2026-03-26 03:09:11.220489 | orchestrator | ===============================================================================
2026-03-26 03:09:11.220500 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 49.11s
2026-03-26 03:09:11.220511 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.42s
2026-03-26 03:09:11.220522 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.23s
2026-03-26 03:09:11.220533 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.93s
2026-03-26 03:09:11.220544 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.13s
2026-03-26 03:09:11.220555 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.44s
2026-03-26 03:09:11.220566 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.15s
2026-03-26 03:09:11.220577 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.40s
2026-03-26 03:09:11.220595 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.76s
2026-03-26 03:09:11.220623 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.72s
2026-03-26 03:09:11.220635 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.48s
2026-03-26 03:09:11.220646 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.32s
2026-03-26 03:09:11.220657 | orchestrator | keystone : Creating default user role ----------------------------------- 3.28s
2026-03-26 03:09:11.220668 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.67s
2026-03-26 03:09:11.220679 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.36s
2026-03-26 03:09:11.220690 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.25s
2026-03-26 03:09:11.220701 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.23s
2026-03-26 03:09:11.220712 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.86s
2026-03-26 03:09:11.220723 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.78s
2026-03-26 03:09:11.220734 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.62s
2026-03-26 03:09:13.865114 | orchestrator | 2026-03-26 03:09:13 | INFO  | Task fb909599-55bc-4ffe-8cce-71be6ebf1872 (placement) was prepared for execution.
2026-03-26 03:09:13.865219 | orchestrator | 2026-03-26 03:09:13 | INFO  | It takes a moment until task fb909599-55bc-4ffe-8cce-71be6ebf1872 (placement) has been started and output is visible here.
2026-03-26 03:09:49.447313 | orchestrator |
2026-03-26 03:09:49.447550 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 03:09:49.447569 | orchestrator |
2026-03-26 03:09:49.447582 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 03:09:49.447595 | orchestrator | Thursday 26 March 2026 03:09:18 +0000 (0:00:00.270) 0:00:00.270 ********
2026-03-26 03:09:49.447607 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:09:49.447621 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:09:49.447633 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:09:49.447647 | orchestrator |
2026-03-26 03:09:49.447656 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 03:09:49.447664 | orchestrator | Thursday 26 March 2026 03:09:18 +0000 (0:00:00.316) 0:00:00.587 ********
2026-03-26 03:09:49.447672 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-26 03:09:49.447680 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-26 03:09:49.447688 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-26 03:09:49.447695 | orchestrator |
2026-03-26 03:09:49.447718 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-26 03:09:49.447725 | orchestrator |
2026-03-26 03:09:49.447733 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-26 03:09:49.447740 | orchestrator | Thursday 26 March 2026 03:09:19 +0000 (0:00:00.474) 0:00:01.061 ********
2026-03-26 03:09:49.447762 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:09:49.447773 | orchestrator |
2026-03-26 03:09:49.447782 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-26 03:09:49.447790 | orchestrator | Thursday 26 March 2026 03:09:19 +0000 (0:00:00.581) 0:00:01.642 ********
2026-03-26 03:09:49.447799 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-26 03:09:49.447808 | orchestrator |
2026-03-26 03:09:49.447816 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-26 03:09:49.447824 | orchestrator | Thursday 26 March 2026 03:09:23 +0000 (0:00:03.627) 0:00:05.269 ********
2026-03-26 03:09:49.447833 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-26 03:09:49.447861 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-26 03:09:49.447870 | orchestrator |
2026-03-26 03:09:49.447880 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-26 03:09:49.447889 | orchestrator | Thursday 26 March 2026 03:09:29 +0000 (0:00:06.439) 0:00:11.709 ********
2026-03-26 03:09:49.447897 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-03-26 03:09:49.447906 | orchestrator |
2026-03-26 03:09:49.447914 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-26 03:09:49.447922 | orchestrator | Thursday 26 March 2026 03:09:33 +0000 (0:00:03.547) 0:00:15.256 ********
2026-03-26 03:09:49.447931 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-26 03:09:49.447940 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-26 03:09:49.447948 | orchestrator |
2026-03-26 03:09:49.447956 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-26 03:09:49.447965 | orchestrator | Thursday 26 March 2026 03:09:37 +0000 (0:00:04.251) 0:00:19.507 ********
2026-03-26 03:09:49.447973 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-26 03:09:49.447982 | orchestrator |
2026-03-26 03:09:49.447991 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-26 03:09:49.447999 | orchestrator | Thursday 26 March 2026 03:09:40 +0000 (0:00:03.104) 0:00:22.612 ********
2026-03-26 03:09:49.448008 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-26 03:09:49.448016 | orchestrator |
2026-03-26 03:09:49.448025 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-26 03:09:49.448033 | orchestrator | Thursday 26 March 2026 03:09:44 +0000 (0:00:04.224) 0:00:26.837 ********
2026-03-26 03:09:49.448042 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:49.448050 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:09:49.448058 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:09:49.448066 | orchestrator |
2026-03-26 03:09:49.448075 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-26 03:09:49.448083 | orchestrator | Thursday 26 March 2026 03:09:45 +0000 (0:00:00.312) 0:00:27.150 ********
2026-03-26 03:09:49.448095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:49.448138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:49.448155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:49.448163 | orchestrator |
2026-03-26 03:09:49.448172 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-26 03:09:49.448179 | orchestrator | Thursday 26 March 2026 03:09:46 +0000 (0:00:00.371) 0:00:28.305 ********
2026-03-26 03:09:49.448187 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:49.448194 | orchestrator |
2026-03-26 03:09:49.448202 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-26 03:09:49.448210 | orchestrator | Thursday 26 March 2026 03:09:46 +0000 (0:00:00.350) 0:00:28.677 ********
2026-03-26 03:09:49.448217 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:49.448225 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:09:49.448232 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:09:49.448240 | orchestrator |
2026-03-26 03:09:49.448247 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-26 03:09:49.448255 | orchestrator | Thursday 26 March 2026 03:09:47 +0000 (0:00:00.583) 0:00:29.028 ********
2026-03-26 03:09:49.448262 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:09:49.448270 | orchestrator |
2026-03-26 03:09:49.448277 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-26 03:09:49.448285 | orchestrator | Thursday 26 March 2026 03:09:47 +0000 (0:00:00.583) 0:00:29.611 ********
2026-03-26 03:09:49.448292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:49.448307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.489771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.489857 | orchestrator |
2026-03-26 03:09:52.489868 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-26 03:09:52.489877 | orchestrator | Thursday 26 March 2026 03:09:49 +0000 (0:00:01.759) 0:00:31.371 ********
2026-03-26 03:09:52.489887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.489895 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:52.489904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.489912 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:09:52.489919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.489946 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:09:52.489954 | orchestrator |
2026-03-26 03:09:52.489961 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-03-26 03:09:52.489982 | orchestrator | Thursday 26 March 2026 03:09:50 +0000 (0:00:00.604) 0:00:31.975 ********
2026-03-26 03:09:52.489994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.490003 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:52.490011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.490070 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:09:52.490078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.490086 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:09:52.490094 | orchestrator |
2026-03-26 03:09:52.490102 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-03-26 03:09:52.490110 | orchestrator | Thursday 26 March 2026 03:09:50 +0000 (0:00:00.753) 0:00:32.729 ********
2026-03-26 03:09:52.490118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:52.490189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.592291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.593419 | orchestrator |
2026-03-26 03:09:59.593500 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-03-26 03:09:59.593526 | orchestrator | Thursday 26 March 2026 03:09:52 +0000 (0:00:01.693) 0:00:34.423 ********
2026-03-26 03:09:59.593549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.593572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.593643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.593665 | orchestrator |
2026-03-26 03:09:59.593685 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-03-26 03:09:59.593705 | orchestrator | Thursday 26 March 2026 03:09:54 +0000 (0:00:02.350) 0:00:36.773 ********
2026-03-26 03:09:59.593748 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-26 03:09:59.593771 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-26 03:09:59.593791 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-03-26 03:09:59.593811 | orchestrator |
2026-03-26 03:09:59.593830 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-03-26 03:09:59.593848 | orchestrator | Thursday 26 March 2026 03:09:56 +0000 (0:00:01.519) 0:00:38.293 ********
2026-03-26 03:09:59.593868 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:09:59.593889 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:09:59.593908 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:09:59.593928 | orchestrator |
2026-03-26 03:09:59.593947 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-03-26 03:09:59.593966 | orchestrator | Thursday 26 March 2026 03:09:57 +0000 (0:00:01.311) 0:00:39.605 ********
2026-03-26 03:09:59.593988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.594009 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:09:59.594106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.594139 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:09:59.594158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:09:59.594317 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:09:59.594372 | orchestrator |
2026-03-26 03:09:59.594392 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-03-26 03:09:59.594422 | orchestrator | Thursday 26 March 2026 03:09:58 +0000 (0:00:00.803) 0:00:40.408 ********
2026-03-26 03:09:59.594462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-26 03:10:29.266003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-26 03:10:29.266199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-26 03:10:29.266218 | orchestrator | 2026-03-26 03:10:29.266231 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-26 03:10:29.266245 | orchestrator | Thursday 26 March 2026 03:09:59 +0000 (0:00:01.119) 0:00:41.528 ******** 2026-03-26 03:10:29.266256 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:10:29.266269 | orchestrator | 2026-03-26 03:10:29.266294 | orchestrator 
| TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-26 03:10:29.266305 | orchestrator | Thursday 26 March 2026 03:10:01 +0000 (0:00:02.081) 0:00:43.609 ******** 2026-03-26 03:10:29.266317 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:10:29.266336 | orchestrator | 2026-03-26 03:10:29.266442 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-26 03:10:29.266463 | orchestrator | Thursday 26 March 2026 03:10:04 +0000 (0:00:02.366) 0:00:45.976 ******** 2026-03-26 03:10:29.266481 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:10:29.266500 | orchestrator | 2026-03-26 03:10:29.266518 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-26 03:10:29.266537 | orchestrator | Thursday 26 March 2026 03:10:18 +0000 (0:00:14.182) 0:01:00.158 ******** 2026-03-26 03:10:29.266556 | orchestrator | 2026-03-26 03:10:29.266576 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-26 03:10:29.266597 | orchestrator | Thursday 26 March 2026 03:10:18 +0000 (0:00:00.076) 0:01:00.234 ******** 2026-03-26 03:10:29.266617 | orchestrator | 2026-03-26 03:10:29.266638 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-26 03:10:29.266659 | orchestrator | Thursday 26 March 2026 03:10:18 +0000 (0:00:00.074) 0:01:00.308 ******** 2026-03-26 03:10:29.266673 | orchestrator | 2026-03-26 03:10:29.266686 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-26 03:10:29.266698 | orchestrator | Thursday 26 March 2026 03:10:18 +0000 (0:00:00.071) 0:01:00.380 ******** 2026-03-26 03:10:29.266711 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:10:29.266740 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:10:29.266752 | orchestrator | changed: [testbed-node-1] 2026-03-26 
03:10:29.266765 | orchestrator | 2026-03-26 03:10:29.266777 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:10:29.266832 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-26 03:10:29.266847 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-26 03:10:29.266861 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-26 03:10:29.266874 | orchestrator | 2026-03-26 03:10:29.266887 | orchestrator | 2026-03-26 03:10:29.266900 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:10:29.266913 | orchestrator | Thursday 26 March 2026 03:10:28 +0000 (0:00:10.449) 0:01:10.829 ******** 2026-03-26 03:10:29.266938 | orchestrator | =============================================================================== 2026-03-26 03:10:29.266949 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.18s 2026-03-26 03:10:29.266983 | orchestrator | placement : Restart placement-api container ---------------------------- 10.45s 2026-03-26 03:10:29.266995 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.44s 2026-03-26 03:10:29.267007 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.25s 2026-03-26 03:10:29.267018 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.22s 2026-03-26 03:10:29.267030 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.63s 2026-03-26 03:10:29.267041 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.55s 2026-03-26 03:10:29.267052 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 
3.10s 2026-03-26 03:10:29.267063 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.37s 2026-03-26 03:10:29.267074 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.35s 2026-03-26 03:10:29.267086 | orchestrator | placement : Creating placement databases -------------------------------- 2.08s 2026-03-26 03:10:29.267097 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.76s 2026-03-26 03:10:29.267108 | orchestrator | placement : Copying over config.json files for services ----------------- 1.69s 2026-03-26 03:10:29.267119 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.52s 2026-03-26 03:10:29.267145 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s 2026-03-26 03:10:29.267166 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.16s 2026-03-26 03:10:29.267178 | orchestrator | placement : Check placement containers ---------------------------------- 1.12s 2026-03-26 03:10:29.267189 | orchestrator | placement : Copying over existing policy file --------------------------- 0.80s 2026-03-26 03:10:29.267200 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.75s 2026-03-26 03:10:29.267211 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.60s 2026-03-26 03:10:31.872311 | orchestrator | 2026-03-26 03:10:31 | INFO  | Task 0cd80ac4-a396-4ee1-bd88-7ddd0a9d3192 (neutron) was prepared for execution. 2026-03-26 03:10:31.872440 | orchestrator | 2026-03-26 03:10:31 | INFO  | It takes a moment until task 0cd80ac4-a396-4ee1-bd88-7ddd0a9d3192 (neutron) has been started and output is visible here. 
2026-03-26 03:11:21.056483 | orchestrator | 2026-03-26 03:11:21.056662 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:11:21.056683 | orchestrator | 2026-03-26 03:11:21.056696 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:11:21.056708 | orchestrator | Thursday 26 March 2026 03:10:36 +0000 (0:00:00.296) 0:00:00.296 ******** 2026-03-26 03:11:21.056719 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:11:21.056732 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:11:21.056743 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:11:21.056754 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:11:21.056765 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:11:21.056776 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:11:21.056787 | orchestrator | 2026-03-26 03:11:21.056799 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:11:21.056810 | orchestrator | Thursday 26 March 2026 03:10:37 +0000 (0:00:00.730) 0:00:01.027 ******** 2026-03-26 03:11:21.056821 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-26 03:11:21.056846 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-26 03:11:21.056858 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-26 03:11:21.056870 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-26 03:11:21.056881 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-26 03:11:21.056917 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-26 03:11:21.056928 | orchestrator | 2026-03-26 03:11:21.056939 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-26 03:11:21.056950 | orchestrator | 2026-03-26 03:11:21.056961 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-03-26 03:11:21.056975 | orchestrator | Thursday 26 March 2026 03:10:37 +0000 (0:00:00.670) 0:00:01.697 ******** 2026-03-26 03:11:21.057004 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:11:21.057018 | orchestrator | 2026-03-26 03:11:21.057030 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-26 03:11:21.057043 | orchestrator | Thursday 26 March 2026 03:10:39 +0000 (0:00:01.487) 0:00:03.185 ******** 2026-03-26 03:11:21.057057 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:11:21.057070 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:11:21.057083 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:11:21.057095 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:11:21.057108 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:11:21.057121 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:11:21.057133 | orchestrator | 2026-03-26 03:11:21.057146 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-26 03:11:21.057159 | orchestrator | Thursday 26 March 2026 03:10:40 +0000 (0:00:01.369) 0:00:04.554 ******** 2026-03-26 03:11:21.057172 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:11:21.057184 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:11:21.057197 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:11:21.057209 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:11:21.057221 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:11:21.057233 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:11:21.057247 | orchestrator | 2026-03-26 03:11:21.057260 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-26 03:11:21.057273 | orchestrator | Thursday 26 March 2026 03:10:41 +0000 (0:00:01.138) 0:00:05.692 ******** 
2026-03-26 03:11:21.057285 | orchestrator | ok: [testbed-node-0] => { 2026-03-26 03:11:21.057299 | orchestrator |  "changed": false, 2026-03-26 03:11:21.057311 | orchestrator |  "msg": "All assertions passed" 2026-03-26 03:11:21.057325 | orchestrator | } 2026-03-26 03:11:21.057336 | orchestrator | ok: [testbed-node-1] => { 2026-03-26 03:11:21.057347 | orchestrator |  "changed": false, 2026-03-26 03:11:21.057357 | orchestrator |  "msg": "All assertions passed" 2026-03-26 03:11:21.057368 | orchestrator | } 2026-03-26 03:11:21.057404 | orchestrator | ok: [testbed-node-2] => { 2026-03-26 03:11:21.057425 | orchestrator |  "changed": false, 2026-03-26 03:11:21.057438 | orchestrator |  "msg": "All assertions passed" 2026-03-26 03:11:21.057450 | orchestrator | } 2026-03-26 03:11:21.057460 | orchestrator | ok: [testbed-node-3] => { 2026-03-26 03:11:21.057471 | orchestrator |  "changed": false, 2026-03-26 03:11:21.057483 | orchestrator |  "msg": "All assertions passed" 2026-03-26 03:11:21.057493 | orchestrator | } 2026-03-26 03:11:21.057504 | orchestrator | ok: [testbed-node-4] => { 2026-03-26 03:11:21.057515 | orchestrator |  "changed": false, 2026-03-26 03:11:21.057527 | orchestrator |  "msg": "All assertions passed" 2026-03-26 03:11:21.057538 | orchestrator | } 2026-03-26 03:11:21.057549 | orchestrator | ok: [testbed-node-5] => { 2026-03-26 03:11:21.057560 | orchestrator |  "changed": false, 2026-03-26 03:11:21.057571 | orchestrator |  "msg": "All assertions passed" 2026-03-26 03:11:21.057582 | orchestrator | } 2026-03-26 03:11:21.057593 | orchestrator | 2026-03-26 03:11:21.057604 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-26 03:11:21.057615 | orchestrator | Thursday 26 March 2026 03:10:42 +0000 (0:00:00.915) 0:00:06.608 ******** 2026-03-26 03:11:21.057626 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:21.057637 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:21.057648 | orchestrator 
| skipping: [testbed-node-2] 2026-03-26 03:11:21.057668 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:21.057679 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:21.057690 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:21.057701 | orchestrator | 2026-03-26 03:11:21.057712 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-26 03:11:21.057723 | orchestrator | Thursday 26 March 2026 03:10:43 +0000 (0:00:00.645) 0:00:07.254 ******** 2026-03-26 03:11:21.057734 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-26 03:11:21.057745 | orchestrator | 2026-03-26 03:11:21.057756 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-26 03:11:21.057767 | orchestrator | Thursday 26 March 2026 03:10:47 +0000 (0:00:03.757) 0:00:11.012 ******** 2026-03-26 03:11:21.057777 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-26 03:11:21.057790 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-26 03:11:21.057801 | orchestrator | 2026-03-26 03:11:21.057831 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-26 03:11:21.057842 | orchestrator | Thursday 26 March 2026 03:10:53 +0000 (0:00:06.386) 0:00:17.399 ******** 2026-03-26 03:11:21.057853 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:11:21.057865 | orchestrator | 2026-03-26 03:11:21.057876 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-26 03:11:21.057887 | orchestrator | Thursday 26 March 2026 03:10:56 +0000 (0:00:03.132) 0:00:20.531 ******** 2026-03-26 03:11:21.057898 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:11:21.057909 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-03-26 03:11:21.057920 | orchestrator | 2026-03-26 03:11:21.057931 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-26 03:11:21.057942 | orchestrator | Thursday 26 March 2026 03:11:00 +0000 (0:00:03.842) 0:00:24.373 ******** 2026-03-26 03:11:21.057953 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 03:11:21.057964 | orchestrator | 2026-03-26 03:11:21.057975 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-26 03:11:21.057986 | orchestrator | Thursday 26 March 2026 03:11:03 +0000 (0:00:03.090) 0:00:27.464 ******** 2026-03-26 03:11:21.057997 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-26 03:11:21.058008 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-26 03:11:21.058084 | orchestrator | 2026-03-26 03:11:21.058096 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-26 03:11:21.058107 | orchestrator | Thursday 26 March 2026 03:11:11 +0000 (0:00:08.310) 0:00:35.774 ******** 2026-03-26 03:11:21.058118 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:21.058130 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:21.058141 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:21.058152 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:21.058163 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:21.058181 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:21.058193 | orchestrator | 2026-03-26 03:11:21.058204 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-26 03:11:21.058215 | orchestrator | Thursday 26 March 2026 03:11:12 +0000 (0:00:00.839) 0:00:36.613 ******** 2026-03-26 03:11:21.058226 | orchestrator | skipping: [testbed-node-0] 2026-03-26 
03:11:21.058237 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:21.058248 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:21.058259 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:21.058270 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:21.058281 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:21.058292 | orchestrator | 2026-03-26 03:11:21.058303 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-26 03:11:21.058314 | orchestrator | Thursday 26 March 2026 03:11:14 +0000 (0:00:02.199) 0:00:38.813 ******** 2026-03-26 03:11:21.058334 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:11:21.058346 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:11:21.058357 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:11:21.058368 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:11:21.058402 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:11:21.058415 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:11:21.058426 | orchestrator | 2026-03-26 03:11:21.058437 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-26 03:11:21.058448 | orchestrator | Thursday 26 March 2026 03:11:16 +0000 (0:00:01.211) 0:00:40.024 ******** 2026-03-26 03:11:21.058459 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:21.058470 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:21.058481 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:21.058492 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:21.058503 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:21.058513 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:21.058524 | orchestrator | 2026-03-26 03:11:21.058535 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-26 03:11:21.058546 | orchestrator | Thursday 26 March 2026 03:11:18 +0000 (0:00:02.297) 
0:00:42.322 ******** 2026-03-26 03:11:21.058561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:21.058588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:26.781271 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:26.781472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:26.781538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:26.781562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:26.781581 | orchestrator | 2026-03-26 03:11:26.781602 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-26 03:11:26.781624 | orchestrator | Thursday 26 March 2026 03:11:21 +0000 (0:00:02.642) 0:00:44.965 ******** 2026-03-26 03:11:26.781642 | orchestrator | [WARNING]: Skipped 2026-03-26 03:11:26.781662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-26 03:11:26.781682 | orchestrator | due to this access issue: 2026-03-26 03:11:26.781702 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-26 03:11:26.781722 | orchestrator | a directory 2026-03-26 03:11:26.781738 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:11:26.781757 | orchestrator | 2026-03-26 03:11:26.781775 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-26 03:11:26.781794 | orchestrator | Thursday 26 March 2026 03:11:21 +0000 (0:00:00.854) 0:00:45.819 ******** 2026-03-26 03:11:26.781813 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:11:26.781889 | orchestrator | 2026-03-26 03:11:26.781904 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-03-26 03:11:26.781940 | orchestrator | Thursday 26 March 2026 03:11:23 +0000 (0:00:01.359) 0:00:47.179 ******** 2026-03-26 03:11:26.781964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:26.781992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:26.782006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:26.782074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:26.782097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:32.063238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:32.063350 | orchestrator | 2026-03-26 03:11:32.063369 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-26 03:11:32.063411 | orchestrator | Thursday 26 March 2026 03:11:26 +0000 (0:00:03.511) 0:00:50.690 ******** 2026-03-26 03:11:32.063429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:32.063443 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:32.063458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:32.063471 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:32.063484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:32.063496 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:32.063554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:32.063570 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:32.063592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:32.063601 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:32.063609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:32.063617 | orchestrator | skipping: [testbed-node-5] 
2026-03-26 03:11:32.063624 | orchestrator | 2026-03-26 03:11:32.063632 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-26 03:11:32.063639 | orchestrator | Thursday 26 March 2026 03:11:28 +0000 (0:00:02.225) 0:00:52.916 ******** 2026-03-26 03:11:32.063647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:32.063655 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:32.063668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:37.983760 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:37.983854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:37.983863 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:37.983870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:37.983875 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:37.983880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:37.983884 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:37.983888 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:37.983907 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:37.983911 | orchestrator | 2026-03-26 
03:11:37.983916 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-26 03:11:37.983932 | orchestrator | Thursday 26 March 2026 03:11:32 +0000 (0:00:03.054) 0:00:55.971 ******** 2026-03-26 03:11:37.983937 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:37.983941 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:37.983947 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:37.983953 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:37.983960 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:37.983966 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:37.983976 | orchestrator | 2026-03-26 03:11:37.983984 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-26 03:11:37.983990 | orchestrator | Thursday 26 March 2026 03:11:34 +0000 (0:00:02.514) 0:00:58.485 ******** 2026-03-26 03:11:37.983996 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:37.984003 | orchestrator | 2026-03-26 03:11:37.984009 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-26 03:11:37.984028 | orchestrator | Thursday 26 March 2026 03:11:34 +0000 (0:00:00.171) 0:00:58.656 ******** 2026-03-26 03:11:37.984035 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:37.984041 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:37.984047 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:37.984054 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:37.984060 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:37.984067 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:37.984073 | orchestrator | 2026-03-26 03:11:37.984079 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-26 03:11:37.984085 | orchestrator | Thursday 26 March 2026 03:11:35 +0000 (0:00:00.663) 
0:00:59.319 ******** 2026-03-26 03:11:37.984097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:37.984104 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:37.984111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 
03:11:37.984125 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:37.984134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:37.984146 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:37.984163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:37.984170 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:37.984194 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:47.813388 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:47.813592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:47.813621 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:47.813639 | orchestrator | 2026-03-26 03:11:47.813656 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-26 03:11:47.813675 | orchestrator | Thursday 26 March 2026 03:11:37 +0000 (0:00:02.568) 0:01:01.888 ******** 2026-03-26 03:11:47.813692 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:47.813745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:47.813767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:47.813830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:47.813854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:47.813887 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:47.813906 | orchestrator | 2026-03-26 03:11:47.813924 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-26 03:11:47.813941 | orchestrator | Thursday 26 March 2026 03:11:41 +0000 (0:00:03.393) 0:01:05.281 ******** 2026-03-26 03:11:47.813959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:47.813978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:11:47.814093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:52.591698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:52.591832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2026-03-26 03:11:52.591854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:11:52.591866 | orchestrator | 2026-03-26 03:11:52.591878 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-26 03:11:52.591891 | orchestrator | Thursday 26 March 2026 03:11:47 +0000 (0:00:06.438) 0:01:11.719 ******** 2026-03-26 03:11:52.591902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2026-03-26 03:11:52.591927 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:11:52.591986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:52.592008 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:11:52.592019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:11:52.592026 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:11:52.592032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:52.592038 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:52.592044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:52.592050 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:52.592062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:11:52.592068 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:52.592074 | orchestrator | 2026-03-26 03:11:52.592081 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-26 03:11:52.592092 | orchestrator | Thursday 26 March 2026 03:11:49 +0000 (0:00:02.044) 0:01:13.764 ******** 2026-03-26 03:11:52.592098 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:11:52.592104 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:11:52.592110 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:11:52.592116 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:11:52.592122 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:11:52.592137 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:12:13.799105 | orchestrator | 2026-03-26 03:12:13.799199 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-26 03:12:13.799213 | orchestrator | Thursday 26 March 2026 03:11:52 +0000 (0:00:02.729) 0:01:16.493 ******** 2026-03-26 03:12:13.799224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:13.799235 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:13.799244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:13.799251 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:13.799256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:13.799260 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:13.799265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:12:13.799317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:12:13.799327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:12:13.799334 | orchestrator | 2026-03-26 03:12:13.799340 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-26 03:12:13.799347 | orchestrator | Thursday 26 March 2026 03:11:56 +0000 (0:00:03.582) 0:01:20.076 ******** 2026-03-26 03:12:13.799353 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:13.799361 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:13.799367 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:13.799373 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:13.799379 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:13.799385 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:13.799391 | orchestrator | 2026-03-26 03:12:13.799397 | orchestrator | 
TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-26 03:12:13.799402 | orchestrator | Thursday 26 March 2026 03:11:58 +0000 (0:00:02.306) 0:01:22.383 ******** 2026-03-26 03:12:13.799409 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:13.799463 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:13.799470 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:13.799476 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:13.799482 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:13.799488 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:13.799493 | orchestrator | 2026-03-26 03:12:13.799497 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-26 03:12:13.799501 | orchestrator | Thursday 26 March 2026 03:12:00 +0000 (0:00:02.333) 0:01:24.717 ******** 2026-03-26 03:12:13.799505 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:13.799509 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:13.799513 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:13.799517 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:13.799521 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:13.799525 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:13.799529 | orchestrator | 2026-03-26 03:12:13.799532 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-26 03:12:13.799544 | orchestrator | Thursday 26 March 2026 03:12:03 +0000 (0:00:02.410) 0:01:27.127 ******** 2026-03-26 03:12:13.799547 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:13.799551 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:13.799555 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:13.799559 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:13.799563 | orchestrator | skipping: [testbed-node-5] 2026-03-26 
03:12:13.799567 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:13.799571 | orchestrator | 2026-03-26 03:12:13.799644 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-26 03:12:13.799650 | orchestrator | Thursday 26 March 2026 03:12:05 +0000 (0:00:02.633) 0:01:29.760 ******** 2026-03-26 03:12:13.799656 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:13.799661 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:13.799666 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:13.799673 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:13.799679 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:13.799684 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:13.799689 | orchestrator | 2026-03-26 03:12:13.799695 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-26 03:12:13.799701 | orchestrator | Thursday 26 March 2026 03:12:08 +0000 (0:00:02.619) 0:01:32.380 ******** 2026-03-26 03:12:13.799706 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:13.799712 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:13.799718 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:13.799723 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:13.799737 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:13.799743 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:13.799750 | orchestrator | 2026-03-26 03:12:13.799756 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-26 03:12:13.799761 | orchestrator | Thursday 26 March 2026 03:12:10 +0000 (0:00:02.367) 0:01:34.748 ******** 2026-03-26 03:12:13.799768 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-26 03:12:13.799775 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:13.799781 
| orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-26 03:12:13.799787 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:13.799793 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-26 03:12:13.799809 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:18.725049 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-26 03:12:18.725156 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:18.725170 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-26 03:12:18.725180 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:18.725191 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-26 03:12:18.725200 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:18.725210 | orchestrator | 2026-03-26 03:12:18.725221 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-26 03:12:18.725231 | orchestrator | Thursday 26 March 2026 03:12:13 +0000 (0:00:02.958) 0:01:37.706 ******** 2026-03-26 03:12:18.725245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:18.725280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:18.725291 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:18.725301 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:18.725310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:18.725320 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:18.725359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:18.725371 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:18.725381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-03-26 03:12:18.725401 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:18.725411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:18.725443 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:18.725454 | orchestrator | 2026-03-26 03:12:18.725463 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-26 03:12:18.725473 | orchestrator | Thursday 26 March 2026 03:12:16 +0000 (0:00:02.551) 0:01:40.257 ******** 2026-03-26 03:12:18.725483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:18.725492 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:18.725506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:18.725516 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:18.725535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:46.130934 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:46.131069 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:46.131084 | orchestrator | skipping: 
[testbed-node-3] 2026-03-26 03:12:46.131091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:46.131098 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131104 | orchestrator | 2026-03-26 03:12:46.131111 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-26 03:12:46.131119 | orchestrator | Thursday 26 March 2026 03:12:18 +0000 (0:00:02.376) 0:01:42.633 ******** 2026-03-26 03:12:46.131125 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131131 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131137 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131143 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131150 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131156 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131162 | orchestrator | 2026-03-26 03:12:46.131186 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-26 03:12:46.131192 | orchestrator | Thursday 26 March 2026 03:12:20 +0000 (0:00:02.208) 0:01:44.842 ******** 2026-03-26 03:12:46.131198 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131204 | orchestrator | 
skipping: [testbed-node-1] 2026-03-26 03:12:46.131210 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131216 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:12:46.131222 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:12:46.131228 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:12:46.131233 | orchestrator | 2026-03-26 03:12:46.131239 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-26 03:12:46.131266 | orchestrator | Thursday 26 March 2026 03:12:24 +0000 (0:00:03.895) 0:01:48.737 ******** 2026-03-26 03:12:46.131272 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131278 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131283 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131289 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131294 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131300 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131305 | orchestrator | 2026-03-26 03:12:46.131310 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-26 03:12:46.131316 | orchestrator | Thursday 26 March 2026 03:12:27 +0000 (0:00:02.286) 0:01:51.024 ******** 2026-03-26 03:12:46.131322 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131328 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131334 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131340 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131346 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131353 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131359 | orchestrator | 2026-03-26 03:12:46.131365 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-26 03:12:46.131393 | orchestrator | Thursday 26 March 2026 03:12:29 +0000 (0:00:02.426) 
0:01:53.450 ******** 2026-03-26 03:12:46.131400 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131406 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131413 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131419 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131426 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131490 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131500 | orchestrator | 2026-03-26 03:12:46.131507 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-26 03:12:46.131513 | orchestrator | Thursday 26 March 2026 03:12:31 +0000 (0:00:02.371) 0:01:55.822 ******** 2026-03-26 03:12:46.131520 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131527 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131533 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131539 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131546 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131552 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131558 | orchestrator | 2026-03-26 03:12:46.131564 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-26 03:12:46.131571 | orchestrator | Thursday 26 March 2026 03:12:34 +0000 (0:00:02.504) 0:01:58.327 ******** 2026-03-26 03:12:46.131577 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131583 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131590 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131596 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131603 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131610 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131617 | orchestrator | 2026-03-26 03:12:46.131624 | orchestrator | TASK [neutron : Copy 
neutron-l3-agent-wrapper script] ************************** 2026-03-26 03:12:46.131631 | orchestrator | Thursday 26 March 2026 03:12:36 +0000 (0:00:02.256) 0:02:00.583 ******** 2026-03-26 03:12:46.131639 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131646 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131653 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131660 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131666 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131673 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131680 | orchestrator | 2026-03-26 03:12:46.131687 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-26 03:12:46.131693 | orchestrator | Thursday 26 March 2026 03:12:39 +0000 (0:00:02.555) 0:02:03.138 ******** 2026-03-26 03:12:46.131700 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131719 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131725 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:46.131732 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131738 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131744 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131751 | orchestrator | 2026-03-26 03:12:46.131758 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-26 03:12:46.131764 | orchestrator | Thursday 26 March 2026 03:12:41 +0000 (0:00:02.440) 0:02:05.579 ******** 2026-03-26 03:12:46.131771 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-26 03:12:46.131779 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:46.131785 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-26 03:12:46.131792 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 03:12:46.131798 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-26 03:12:46.131805 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:46.131812 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-26 03:12:46.131819 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131825 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-26 03:12:46.131831 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:46.131838 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-26 03:12:46.131851 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:46.131856 | orchestrator | 2026-03-26 03:12:46.131862 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-26 03:12:46.131867 | orchestrator | Thursday 26 March 2026 03:12:43 +0000 (0:00:01.931) 0:02:07.510 ******** 2026-03-26 03:12:46.131876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:46.131884 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:12:46.131900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:48.866961 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:12:48.867086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-26 03:12:48.867099 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:12:48.867107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:48.867114 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:12:48.867133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:48.867145 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:12:48.867159 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 03:12:48.867168 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:12:48.867177 | orchestrator | 2026-03-26 03:12:48.867186 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-26 03:12:48.867196 | orchestrator | Thursday 26 March 2026 03:12:46 +0000 (0:00:02.528) 0:02:10.039 ******** 2026-03-26 03:12:48.867221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:12:48.867243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:12:48.867253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:12:48.867269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-26 03:12:48.867279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:12:48.867300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-26 03:15:10.979433 | orchestrator | 2026-03-26 03:15:10.979641 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-26 03:15:10.979679 | orchestrator | Thursday 26 March 2026 03:12:48 +0000 (0:00:02.731) 0:02:12.770 ******** 2026-03-26 03:15:10.979702 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:15:10.979724 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:15:10.979745 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:15:10.979764 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:15:10.979786 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:15:10.979842 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:15:10.979864 | orchestrator | 2026-03-26 03:15:10.979885 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-26 03:15:10.979907 | orchestrator | Thursday 26 March 2026 03:12:49 +0000 (0:00:00.822) 0:02:13.593 ******** 2026-03-26 03:15:10.979928 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:15:10.979947 | orchestrator | 2026-03-26 03:15:10.979968 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-26 03:15:10.979988 | orchestrator | Thursday 26 March 2026 03:12:51 +0000 (0:00:02.097) 0:02:15.690 ******** 2026-03-26 03:15:10.980007 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:15:10.980027 | orchestrator | 2026-03-26 03:15:10.980047 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-26 
03:15:10.980068 | orchestrator | Thursday 26 March 2026 03:12:54 +0000 (0:00:02.247) 0:02:17.938 ******** 2026-03-26 03:15:10.980088 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:15:10.980108 | orchestrator | 2026-03-26 03:15:10.980128 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-26 03:15:10.980150 | orchestrator | Thursday 26 March 2026 03:13:37 +0000 (0:00:43.340) 0:03:01.278 ******** 2026-03-26 03:15:10.980171 | orchestrator | 2026-03-26 03:15:10.980189 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-26 03:15:10.980208 | orchestrator | Thursday 26 March 2026 03:13:37 +0000 (0:00:00.093) 0:03:01.372 ******** 2026-03-26 03:15:10.980227 | orchestrator | 2026-03-26 03:15:10.980245 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-26 03:15:10.980263 | orchestrator | Thursday 26 March 2026 03:13:37 +0000 (0:00:00.073) 0:03:01.446 ******** 2026-03-26 03:15:10.980311 | orchestrator | 2026-03-26 03:15:10.980329 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-26 03:15:10.980349 | orchestrator | Thursday 26 March 2026 03:13:37 +0000 (0:00:00.074) 0:03:01.520 ******** 2026-03-26 03:15:10.980368 | orchestrator | 2026-03-26 03:15:10.980410 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-26 03:15:10.980430 | orchestrator | Thursday 26 March 2026 03:13:37 +0000 (0:00:00.069) 0:03:01.590 ******** 2026-03-26 03:15:10.980449 | orchestrator | 2026-03-26 03:15:10.980468 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-26 03:15:10.980487 | orchestrator | Thursday 26 March 2026 03:13:37 +0000 (0:00:00.070) 0:03:01.661 ******** 2026-03-26 03:15:10.980503 | orchestrator | 2026-03-26 03:15:10.980514 | orchestrator | RUNNING HANDLER 
[neutron : Restart neutron-server container] ******************* 2026-03-26 03:15:10.980525 | orchestrator | Thursday 26 March 2026 03:13:37 +0000 (0:00:00.072) 0:03:01.733 ******** 2026-03-26 03:15:10.980561 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:15:10.980573 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:15:10.980585 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:15:10.980596 | orchestrator | 2026-03-26 03:15:10.980607 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-26 03:15:10.980618 | orchestrator | Thursday 26 March 2026 03:14:07 +0000 (0:00:30.095) 0:03:31.829 ******** 2026-03-26 03:15:10.980628 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:15:10.980640 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:15:10.980650 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:15:10.980661 | orchestrator | 2026-03-26 03:15:10.980673 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:15:10.980685 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-26 03:15:10.980698 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-26 03:15:10.980709 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-26 03:15:10.980721 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-26 03:15:10.980732 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-26 03:15:10.980742 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-26 03:15:10.980753 | orchestrator | 2026-03-26 03:15:10.980764 | orchestrator | 2026-03-26 03:15:10.980775 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-03-26 03:15:10.980786 | orchestrator | Thursday 26 March 2026 03:15:10 +0000 (0:01:02.526) 0:04:34.356 ******** 2026-03-26 03:15:10.980797 | orchestrator | =============================================================================== 2026-03-26 03:15:10.980808 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.53s 2026-03-26 03:15:10.980819 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.34s 2026-03-26 03:15:10.980830 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.10s 2026-03-26 03:15:10.980866 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.31s 2026-03-26 03:15:10.980878 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.44s 2026-03-26 03:15:10.980888 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.39s 2026-03-26 03:15:10.980899 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.90s 2026-03-26 03:15:10.980910 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.84s 2026-03-26 03:15:10.980921 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.76s 2026-03-26 03:15:10.980932 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.58s 2026-03-26 03:15:10.980943 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.51s 2026-03-26 03:15:10.980953 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.39s 2026-03-26 03:15:10.980964 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.13s 2026-03-26 03:15:10.980975 | orchestrator | service-ks-register : 
neutron | Creating roles -------------------------- 3.09s 2026-03-26 03:15:10.980986 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.05s 2026-03-26 03:15:10.980996 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 2.96s 2026-03-26 03:15:10.981015 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.73s 2026-03-26 03:15:10.981026 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.73s 2026-03-26 03:15:10.981037 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.64s 2026-03-26 03:15:10.981048 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 2.63s 2026-03-26 03:15:13.617381 | orchestrator | 2026-03-26 03:15:13 | INFO  | Task bd11feb7-7d46-47e4-ae00-c4b2147a1148 (nova) was prepared for execution. 2026-03-26 03:15:13.617470 | orchestrator | 2026-03-26 03:15:13 | INFO  | It takes a moment until task bd11feb7-7d46-47e4-ae00-c4b2147a1148 (nova) has been started and output is visible here. 
2026-03-26 03:17:12.333285 | orchestrator | 2026-03-26 03:17:12.333482 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:17:12.333512 | orchestrator | 2026-03-26 03:17:12.333524 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-26 03:17:12.333535 | orchestrator | Thursday 26 March 2026 03:15:18 +0000 (0:00:00.306) 0:00:00.306 ******** 2026-03-26 03:17:12.333545 | orchestrator | changed: [testbed-manager] 2026-03-26 03:17:12.333556 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.333566 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:17:12.333576 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:17:12.333586 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:17:12.333596 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:17:12.333606 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:17:12.333616 | orchestrator | 2026-03-26 03:17:12.333627 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:17:12.333673 | orchestrator | Thursday 26 March 2026 03:15:19 +0000 (0:00:00.906) 0:00:01.213 ******** 2026-03-26 03:17:12.333695 | orchestrator | changed: [testbed-manager] 2026-03-26 03:17:12.333707 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.333719 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:17:12.333730 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:17:12.333742 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:17:12.333759 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:17:12.333775 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:17:12.333787 | orchestrator | 2026-03-26 03:17:12.333798 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:17:12.333809 | orchestrator | Thursday 26 March 2026 03:15:20 +0000 (0:00:00.958) 0:00:02.172 
******** 2026-03-26 03:17:12.333821 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-26 03:17:12.333834 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-03-26 03:17:12.333845 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-26 03:17:12.333867 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-26 03:17:12.333879 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-26 03:17:12.333891 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-26 03:17:12.333902 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-26 03:17:12.333914 | orchestrator | 2026-03-26 03:17:12.334008 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-26 03:17:12.334086 | orchestrator | 2026-03-26 03:17:12.334099 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-26 03:17:12.334111 | orchestrator | Thursday 26 March 2026 03:15:20 +0000 (0:00:00.860) 0:00:03.033 ******** 2026-03-26 03:17:12.334122 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:17:12.334132 | orchestrator | 2026-03-26 03:17:12.334143 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-26 03:17:12.334153 | orchestrator | Thursday 26 March 2026 03:15:21 +0000 (0:00:00.851) 0:00:03.884 ******** 2026-03-26 03:17:12.334164 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-26 03:17:12.334198 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-26 03:17:12.334208 | orchestrator | 2026-03-26 03:17:12.334218 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-26 03:17:12.334228 | orchestrator | Thursday 26 March 2026 03:15:25 +0000 (0:00:04.046) 0:00:07.930 
******** 2026-03-26 03:17:12.334239 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-26 03:17:12.334249 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-26 03:17:12.334258 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.334268 | orchestrator | 2026-03-26 03:17:12.334278 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-26 03:17:12.334288 | orchestrator | Thursday 26 March 2026 03:15:29 +0000 (0:00:04.101) 0:00:12.032 ******** 2026-03-26 03:17:12.334298 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.334309 | orchestrator | 2026-03-26 03:17:12.334319 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-26 03:17:12.334329 | orchestrator | Thursday 26 March 2026 03:15:30 +0000 (0:00:00.681) 0:00:12.714 ******** 2026-03-26 03:17:12.334338 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.334348 | orchestrator | 2026-03-26 03:17:12.334358 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-26 03:17:12.334368 | orchestrator | Thursday 26 March 2026 03:15:31 +0000 (0:00:01.307) 0:00:14.021 ******** 2026-03-26 03:17:12.334378 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.334388 | orchestrator | 2026-03-26 03:17:12.334398 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-26 03:17:12.334408 | orchestrator | Thursday 26 March 2026 03:15:34 +0000 (0:00:02.696) 0:00:16.718 ******** 2026-03-26 03:17:12.334418 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:17:12.334428 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.334438 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.334448 | orchestrator | 2026-03-26 03:17:12.334457 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-26 
03:17:12.334467 | orchestrator | Thursday 26 March 2026 03:15:34 +0000 (0:00:00.330) 0:00:17.048 ******** 2026-03-26 03:17:12.334477 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:17:12.334487 | orchestrator | 2026-03-26 03:17:12.334497 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-26 03:17:12.334507 | orchestrator | Thursday 26 March 2026 03:16:07 +0000 (0:00:32.739) 0:00:49.788 ******** 2026-03-26 03:17:12.334517 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.334527 | orchestrator | 2026-03-26 03:17:12.334536 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-26 03:17:12.334546 | orchestrator | Thursday 26 March 2026 03:16:22 +0000 (0:00:14.467) 0:01:04.255 ******** 2026-03-26 03:17:12.334556 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:17:12.334566 | orchestrator | 2026-03-26 03:17:12.334576 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-26 03:17:12.334586 | orchestrator | Thursday 26 March 2026 03:16:33 +0000 (0:00:11.499) 0:01:15.754 ******** 2026-03-26 03:17:12.334617 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:17:12.334636 | orchestrator | 2026-03-26 03:17:12.334661 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-26 03:17:12.334678 | orchestrator | Thursday 26 March 2026 03:16:34 +0000 (0:00:00.705) 0:01:16.459 ******** 2026-03-26 03:17:12.334693 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:17:12.334707 | orchestrator | 2026-03-26 03:17:12.334723 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-26 03:17:12.334737 | orchestrator | Thursday 26 March 2026 03:16:34 +0000 (0:00:00.511) 0:01:16.970 ******** 2026-03-26 03:17:12.334753 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-26 03:17:12.334768 | orchestrator | 2026-03-26 03:17:12.334782 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-26 03:17:12.334811 | orchestrator | Thursday 26 March 2026 03:16:35 +0000 (0:00:00.753) 0:01:17.724 ******** 2026-03-26 03:17:12.334835 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:17:12.334853 | orchestrator | 2026-03-26 03:17:12.334869 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-26 03:17:12.334885 | orchestrator | Thursday 26 March 2026 03:16:52 +0000 (0:00:17.279) 0:01:35.004 ******** 2026-03-26 03:17:12.334902 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:17:12.334918 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.334964 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.334979 | orchestrator | 2026-03-26 03:17:12.334994 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-26 03:17:12.335010 | orchestrator | 2026-03-26 03:17:12.335026 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-26 03:17:12.335042 | orchestrator | Thursday 26 March 2026 03:16:53 +0000 (0:00:00.364) 0:01:35.368 ******** 2026-03-26 03:17:12.335057 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:17:12.335073 | orchestrator | 2026-03-26 03:17:12.335088 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-26 03:17:12.335104 | orchestrator | Thursday 26 March 2026 03:16:54 +0000 (0:00:00.879) 0:01:36.248 ******** 2026-03-26 03:17:12.335120 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335136 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335154 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.335170 | orchestrator | 
2026-03-26 03:17:12.335187 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-26 03:17:12.335203 | orchestrator | Thursday 26 March 2026 03:16:56 +0000 (0:00:02.001) 0:01:38.250 ******** 2026-03-26 03:17:12.335218 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335234 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335252 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.335268 | orchestrator | 2026-03-26 03:17:12.335285 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-26 03:17:12.335302 | orchestrator | Thursday 26 March 2026 03:16:58 +0000 (0:00:02.058) 0:01:40.309 ******** 2026-03-26 03:17:12.335317 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:17:12.335333 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335349 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335367 | orchestrator | 2026-03-26 03:17:12.335384 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-26 03:17:12.335400 | orchestrator | Thursday 26 March 2026 03:16:58 +0000 (0:00:00.579) 0:01:40.888 ******** 2026-03-26 03:17:12.335417 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-26 03:17:12.335432 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335449 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-26 03:17:12.335466 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335483 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-26 03:17:12.335499 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-26 03:17:12.335515 | orchestrator | 2026-03-26 03:17:12.335530 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-26 03:17:12.335547 | orchestrator | Thursday 26 March 2026 03:17:06 +0000 
(0:00:07.383) 0:01:48.271 ******** 2026-03-26 03:17:12.335564 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:17:12.335581 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335597 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335613 | orchestrator | 2026-03-26 03:17:12.335629 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-26 03:17:12.335645 | orchestrator | Thursday 26 March 2026 03:17:06 +0000 (0:00:00.415) 0:01:48.686 ******** 2026-03-26 03:17:12.335661 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-26 03:17:12.335679 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:17:12.335696 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-26 03:17:12.335725 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335740 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-26 03:17:12.335756 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335772 | orchestrator | 2026-03-26 03:17:12.335788 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-26 03:17:12.335805 | orchestrator | Thursday 26 March 2026 03:17:08 +0000 (0:00:01.576) 0:01:50.262 ******** 2026-03-26 03:17:12.335822 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335838 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335854 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:17:12.335870 | orchestrator | 2026-03-26 03:17:12.335888 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-26 03:17:12.335906 | orchestrator | Thursday 26 March 2026 03:17:08 +0000 (0:00:00.527) 0:01:50.790 ******** 2026-03-26 03:17:12.335946 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.335963 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.335979 | orchestrator | changed: 
[testbed-node-0] 2026-03-26 03:17:12.335995 | orchestrator | 2026-03-26 03:17:12.336010 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-03-26 03:17:12.336027 | orchestrator | Thursday 26 March 2026 03:17:09 +0000 (0:00:01.006) 0:01:51.796 ******** 2026-03-26 03:17:12.336043 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:17:12.336061 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:17:12.336094 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:18:30.805670 | orchestrator | 2026-03-26 03:18:30.805879 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-26 03:18:30.805899 | orchestrator | Thursday 26 March 2026 03:17:12 +0000 (0:00:02.636) 0:01:54.433 ******** 2026-03-26 03:18:30.805911 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:30.805922 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:30.805932 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:18:30.805943 | orchestrator | 2026-03-26 03:18:30.805954 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-26 03:18:30.805964 | orchestrator | Thursday 26 March 2026 03:17:34 +0000 (0:00:21.755) 0:02:16.188 ******** 2026-03-26 03:18:30.805974 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:30.805984 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:30.805994 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:18:30.806004 | orchestrator | 2026-03-26 03:18:30.806067 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-26 03:18:30.806079 | orchestrator | Thursday 26 March 2026 03:17:46 +0000 (0:00:12.347) 0:02:28.535 ******** 2026-03-26 03:18:30.806089 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:18:30.806099 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:30.806109 | orchestrator | skipping: [testbed-node-2] 
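The "Get a list of existing cells" / "Create cell" tasks that follow correspond to the `nova-manage cell_v2` CLI. A hedged sketch of the equivalent manual commands (connection URLs are placeholders, not values from this deployment):

```shell
# Inspect currently registered cells (cell0 plus any compute cells):
nova-manage cell_v2 list_cells --verbose

# Register a compute cell if it does not exist yet; kolla derives the
# database and transport URLs from its own config:
nova-manage cell_v2 create_cell --name cell1 \
  --database_connection "mysql+pymysql://nova:PASSWORD@db-host/nova" \
  --transport-url "rabbit://openstack:PASSWORD@rabbit-host:5672/"
```

The "Update cell" task is then skipped because the freshly created cell already matches the desired database and transport URLs.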
2026-03-26 03:18:30.806152 | orchestrator | 2026-03-26 03:18:30.806165 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-26 03:18:30.806176 | orchestrator | Thursday 26 March 2026 03:17:47 +0000 (0:00:01.198) 0:02:29.734 ******** 2026-03-26 03:18:30.806187 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:30.806199 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:30.806211 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:18:30.806222 | orchestrator | 2026-03-26 03:18:30.806233 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-26 03:18:30.806244 | orchestrator | Thursday 26 March 2026 03:18:00 +0000 (0:00:12.613) 0:02:42.347 ******** 2026-03-26 03:18:30.806255 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:30.806266 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:30.806277 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:30.806288 | orchestrator | 2026-03-26 03:18:30.806300 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-26 03:18:30.806323 | orchestrator | Thursday 26 March 2026 03:18:01 +0000 (0:00:01.116) 0:02:43.463 ******** 2026-03-26 03:18:30.806371 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:30.806384 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:30.806395 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:30.806406 | orchestrator | 2026-03-26 03:18:30.806418 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-26 03:18:30.806428 | orchestrator | 2026-03-26 03:18:30.806440 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-26 03:18:30.806451 | orchestrator | Thursday 26 March 2026 03:18:01 +0000 (0:00:00.339) 0:02:43.803 ******** 2026-03-26 03:18:30.806462 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:18:30.806474 | orchestrator | 2026-03-26 03:18:30.806484 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-03-26 03:18:30.806494 | orchestrator | Thursday 26 March 2026 03:18:02 +0000 (0:00:00.809) 0:02:44.612 ******** 2026-03-26 03:18:30.806504 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-26 03:18:30.806514 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-26 03:18:30.806524 | orchestrator | 2026-03-26 03:18:30.806534 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-26 03:18:30.806543 | orchestrator | Thursday 26 March 2026 03:18:05 +0000 (0:00:03.184) 0:02:47.797 ******** 2026-03-26 03:18:30.806553 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-26 03:18:30.806624 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-26 03:18:30.806636 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-26 03:18:30.806647 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-26 03:18:30.806658 | orchestrator | 2026-03-26 03:18:30.806668 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-26 03:18:30.806678 | orchestrator | Thursday 26 March 2026 03:18:11 +0000 (0:00:06.199) 0:02:53.997 ******** 2026-03-26 03:18:30.806688 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:18:30.806697 | orchestrator | 2026-03-26 03:18:30.806707 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-03-26 03:18:30.806717 | orchestrator | Thursday 26 March 2026 03:18:15 +0000 (0:00:03.242) 0:02:57.240 ******** 2026-03-26 03:18:30.806727 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:18:30.806762 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-03-26 03:18:30.806772 | orchestrator | 2026-03-26 03:18:30.806782 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-26 03:18:30.806792 | orchestrator | Thursday 26 March 2026 03:18:18 +0000 (0:00:03.849) 0:03:01.089 ******** 2026-03-26 03:18:30.806802 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 03:18:30.806811 | orchestrator | 2026-03-26 03:18:30.806821 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-26 03:18:30.806831 | orchestrator | Thursday 26 March 2026 03:18:22 +0000 (0:00:03.153) 0:03:04.243 ******** 2026-03-26 03:18:30.806840 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-26 03:18:30.806851 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-26 03:18:30.806860 | orchestrator | 2026-03-26 03:18:30.806873 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-26 03:18:30.806919 | orchestrator | Thursday 26 March 2026 03:18:29 +0000 (0:00:07.345) 0:03:11.588 ******** 2026-03-26 03:18:30.806943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:30.807007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:30.807030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:30.807071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-03-26 03:18:35.548437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:35.548565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:35.548583 | orchestrator | 2026-03-26 03:18:35.548597 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-26 03:18:35.548610 | orchestrator | Thursday 26 March 2026 03:18:30 +0000 (0:00:01.320) 0:03:12.909 ******** 2026-03-26 03:18:35.548622 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:35.548634 | orchestrator | 2026-03-26 03:18:35.548645 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-26 03:18:35.548657 | orchestrator | Thursday 26 March 2026 03:18:30 +0000 (0:00:00.157) 0:03:13.067 ******** 2026-03-26 03:18:35.548668 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:35.548679 | 
orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:35.548690 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:35.548701 | orchestrator | 2026-03-26 03:18:35.548712 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-26 03:18:35.548723 | orchestrator | Thursday 26 March 2026 03:18:31 +0000 (0:00:00.341) 0:03:13.408 ******** 2026-03-26 03:18:35.548815 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:18:35.548827 | orchestrator | 2026-03-26 03:18:35.548838 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-26 03:18:35.548849 | orchestrator | Thursday 26 March 2026 03:18:31 +0000 (0:00:00.688) 0:03:14.097 ******** 2026-03-26 03:18:35.548860 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:35.548871 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:35.548881 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:35.548892 | orchestrator | 2026-03-26 03:18:35.548904 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-26 03:18:35.548915 | orchestrator | Thursday 26 March 2026 03:18:32 +0000 (0:00:00.570) 0:03:14.668 ******** 2026-03-26 03:18:35.548926 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:18:35.548939 | orchestrator | 2026-03-26 03:18:35.548953 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-26 03:18:35.548965 | orchestrator | Thursday 26 March 2026 03:18:33 +0000 (0:00:00.595) 0:03:15.263 ******** 2026-03-26 03:18:35.549001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:35.549093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:35.549110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:35.549122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:35.549135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:35.549173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:35.549186 | orchestrator | 2026-03-26 03:18:35.549204 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-26 03:18:37.304424 | orchestrator | Thursday 26 March 2026 03:18:35 +0000 (0:00:02.389) 0:03:17.653 ******** 2026-03-26 03:18:37.304548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:37.304574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:37.304591 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:37.304607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:37.304637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:37.304663 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:37.304699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:37.304716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:37.304756 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:37.304771 | orchestrator | 2026-03-26 03:18:37.304784 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-26 03:18:37.304799 | orchestrator | Thursday 26 March 2026 03:18:36 +0000 (0:00:00.900) 0:03:18.554 
******** 2026-03-26 03:18:37.304812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:37.304841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:37.304855 | orchestrator | skipping: [testbed-node-0] 
2026-03-26 03:18:37.304886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:39.783450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:39.783558 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
03:18:39.783579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:39.783619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:39.783632 | orchestrator | skipping: [testbed-node-2] 2026-03-26 
03:18:39.783645 | orchestrator | 2026-03-26 03:18:39.783658 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-26 03:18:39.783671 | orchestrator | Thursday 26 March 2026 03:18:37 +0000 (0:00:00.859) 0:03:19.413 ******** 2026-03-26 03:18:39.783698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:39.783811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:39.783837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:39.783861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:39.783919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:39.783944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-03-26 03:18:46.610774 | orchestrator | 2026-03-26 03:18:46.610904 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-26 03:18:46.610926 | orchestrator | Thursday 26 March 2026 03:18:39 +0000 (0:00:02.478) 0:03:21.892 ******** 2026-03-26 03:18:46.610947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:46.611054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:46.611090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:46.611125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:46.611142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:46.611168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:46.611183 | orchestrator | 2026-03-26 03:18:46.611197 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-26 03:18:46.611212 | orchestrator | Thursday 26 March 2026 03:18:45 +0000 (0:00:06.206) 0:03:28.098 ******** 2026-03-26 03:18:46.611232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:46.611250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:46.611267 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:46.611315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:51.162340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:51.162417 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:51.162428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-26 03:18:51.162448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:18:51.162453 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:51.162459 | orchestrator | 2026-03-26 03:18:51.162465 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-26 03:18:51.162472 | orchestrator | Thursday 26 March 2026 03:18:46 +0000 (0:00:00.622) 0:03:28.721 ******** 2026-03-26 03:18:51.162477 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:18:51.162482 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:18:51.162487 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:18:51.162492 | orchestrator | 2026-03-26 03:18:51.162497 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-03-26 03:18:51.162502 | orchestrator | Thursday 26 March 2026 03:18:48 +0000 (0:00:01.647) 0:03:30.368 ******** 2026-03-26 03:18:51.162507 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:18:51.162512 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:18:51.162517 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:18:51.162522 | orchestrator | 2026-03-26 03:18:51.162527 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-26 03:18:51.162532 | orchestrator | Thursday 26 March 2026 03:18:48 +0000 (0:00:00.342) 0:03:30.710 ******** 2026-03-26 03:18:51.162549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:51.162580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:51.162591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-26 03:18:51.162597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:51.162608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:18:51.162618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:33.493478 | orchestrator | 2026-03-26 03:19:33.493689 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-26 03:19:33.493725 | orchestrator | Thursday 26 March 2026 03:18:50 +0000 (0:00:02.115) 0:03:32.826 ******** 2026-03-26 03:19:33.493745 | orchestrator | 2026-03-26 03:19:33.493764 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-26 03:19:33.493783 | orchestrator | Thursday 26 March 2026 03:18:50 +0000 (0:00:00.150) 0:03:32.977 ******** 2026-03-26 
03:19:33.493801 | orchestrator |
2026-03-26 03:19:33.493819 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-03-26 03:19:33.493837 | orchestrator | Thursday 26 March 2026 03:18:50 +0000 (0:00:00.139) 0:03:33.117 ********
2026-03-26 03:19:33.493853 | orchestrator |
2026-03-26 03:19:33.493872 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-03-26 03:19:33.493889 | orchestrator | Thursday 26 March 2026 03:18:51 +0000 (0:00:00.148) 0:03:33.265 ********
2026-03-26 03:19:33.493909 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:19:33.493929 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:19:33.493949 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:19:33.493967 | orchestrator |
2026-03-26 03:19:33.493985 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-03-26 03:19:33.494005 | orchestrator | Thursday 26 March 2026 03:19:15 +0000 (0:00:23.876) 0:03:57.142 ********
2026-03-26 03:19:33.494103 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:19:33.494124 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:19:33.494143 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:19:33.494162 | orchestrator |
2026-03-26 03:19:33.494180 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-03-26 03:19:33.494198 | orchestrator |
2026-03-26 03:19:33.494214 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-26 03:19:33.494231 | orchestrator | Thursday 26 March 2026 03:19:20 +0000 (0:00:01.330) 0:04:02.566 ********
2026-03-26 03:19:33.494249 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:19:33.494266 | orchestrator |
2026-03-26 03:19:33.494282 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-26 03:19:33.494316 | orchestrator | Thursday 26 March 2026 03:19:21 +0000 (0:00:01.330) 0:04:03.896 ********
2026-03-26 03:19:33.494333 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:19:33.494349 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:19:33.494364 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:19:33.494413 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:19:33.494431 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:19:33.494446 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:19:33.494462 | orchestrator |
2026-03-26 03:19:33.494478 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-03-26 03:19:33.494493 | orchestrator | Thursday 26 March 2026 03:19:22 +0000 (0:00:00.950) 0:04:04.847 ********
2026-03-26 03:19:33.494509 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:19:33.494524 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:19:33.494540 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:19:33.494558 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 03:19:33.494575 | orchestrator |
2026-03-26 03:19:33.494591 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-26 03:19:33.494680 | orchestrator | Thursday 26 March 2026 03:19:23 +0000 (0:00:00.895) 0:04:05.742 ********
2026-03-26 03:19:33.494702 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-03-26 03:19:33.494718 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-03-26 03:19:33.494734 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-03-26 03:19:33.494749 | orchestrator |
2026-03-26 03:19:33.494766 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-26 03:19:33.494782 | orchestrator | Thursday 26 March 2026 03:19:24 +0000 (0:00:00.866) 0:04:06.609 ********
2026-03-26 03:19:33.494797 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-03-26 03:19:33.494812 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-03-26 03:19:33.494827 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-03-26 03:19:33.494843 | orchestrator |
2026-03-26 03:19:33.494859 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-26 03:19:33.494874 | orchestrator | Thursday 26 March 2026 03:19:25 +0000 (0:00:01.247) 0:04:07.856 ********
2026-03-26 03:19:33.494890 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-03-26 03:19:33.494906 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:19:33.494921 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-03-26 03:19:33.494938 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:19:33.494953 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-03-26 03:19:33.494970 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:19:33.494985 | orchestrator |
2026-03-26 03:19:33.495002 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-03-26 03:19:33.495018 | orchestrator | Thursday 26 March 2026 03:19:26 +0000 (0:00:00.604) 0:04:08.461 ********
2026-03-26 03:19:33.495033 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-26 03:19:33.495049 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-26 03:19:33.495065 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-26 03:19:33.495081 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:19:33.495097 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-26 03:19:33.495113 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-26 03:19:33.495129 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:19:33.495145 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-26 03:19:33.495187 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-26 03:19:33.495204 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:19:33.495220 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-26 03:19:33.495235 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-26 03:19:33.495251 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-03-26 03:19:33.495280 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-26 03:19:33.495296 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-03-26 03:19:33.495311 | orchestrator |
2026-03-26 03:19:33.495327 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-03-26 03:19:33.495343 | orchestrator | Thursday 26 March 2026 03:19:28 +0000 (0:00:02.015) 0:04:10.477 ********
2026-03-26 03:19:33.495359 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:19:33.495376 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:19:33.495394 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:19:33.495409 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:19:33.495425 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:19:33.495440 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:19:33.495455 | orchestrator |
2026-03-26 03:19:33.495471 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-03-26 03:19:33.495486 | orchestrator |
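The br_netfilter module-load, modules-load.d persistence, and bridge-nf-call sysctl tasks logged above boil down to a few commands on the compute nodes (testbed-node-3..5). A minimal dry-run sketch follows; the `run` helper is illustrative and not part of the playbook, and on a real node the equivalent commands execute as root:

```shell
#!/bin/sh
# Dry-run sketch of the module-load and sysctl tasks shown in the log.
# The "run" helper only prints each command; swap it for `run() { "$@"; }`
# to actually apply the changes (requires root on the target node).
run() { echo "+ $*"; }

# TASK [module-load : Load modules] -- load the bridge netfilter module now.
run modprobe br_netfilter

# TASK [module-load : Persist modules via modules-load.d] -- reload on boot.
run sh -c 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'

# TASK [nova-cell : Enable bridge-nf-call sysctl variables] -- make bridged
# traffic traverse iptables/ip6tables, as needed for Neutron security groups.
run sysctl -w net.bridge.bridge-nf-call-iptables=1
run sysctl -w net.bridge.bridge-nf-call-ip6tables=1
```

The sysctls only exist once br_netfilter is loaded, which is why the module-load task runs first in the log.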
Thursday 26 March 2026 03:19:29 +0000 (0:00:01.311) 0:04:11.789 ******** 2026-03-26 03:19:33.495501 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:19:33.495517 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:19:33.495532 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:19:33.495548 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:19:33.495565 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:19:33.495580 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:19:33.495597 | orchestrator | 2026-03-26 03:19:33.495635 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-26 03:19:33.495652 | orchestrator | Thursday 26 March 2026 03:19:31 +0000 (0:00:01.830) 0:04:13.619 ******** 2026-03-26 03:19:33.495737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:19:33.495760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:19:33.495789 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386853 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386945 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386950 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:19:35.386996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:35.387001 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:35.387006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:35.387010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:35.387026 | orchestrator | 2026-03-26 03:19:35.387032 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-26 03:19:35.387038 | orchestrator | Thursday 
26 March 2026 03:19:33 +0000 (0:00:02.391) 0:04:16.011 ******** 2026-03-26 03:19:35.387042 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:19:35.387048 | orchestrator | 2026-03-26 03:19:35.387052 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-26 03:19:35.387060 | orchestrator | Thursday 26 March 2026 03:19:35 +0000 (0:00:01.483) 0:04:17.495 ******** 2026-03-26 03:19:38.791480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 
03:19:38.791675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791702 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791714 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791721 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:38.791756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:40.427243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:40.427360 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:19:40.427373 | orchestrator | 2026-03-26 03:19:40.427385 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-26 03:19:40.427396 | orchestrator | Thursday 26 March 2026 03:19:39 +0000 (0:00:03.709) 0:04:21.204 ******** 2026-03-26 03:19:40.427409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-26 03:19:40.427446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-26 03:19:40.427457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-26 03:19:40.427467 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:19:40.427500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:19:40.427510 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:19:40.427520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-26 03:19:40.427548 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:19:40.427559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:19:40.427569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:19:40.427587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-26 03:19:42.548895 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:19:42.549011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:19:42.549031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:19:42.549065 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:19:42.549078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:19:42.549090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:19:42.549102 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:19:42.549113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:19:42.549125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:19:42.549137 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:19:42.549148 | orchestrator |
2026-03-26 03:19:42.549161 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-03-26 03:19:42.549174 | orchestrator | Thursday 26 March 2026 03:19:40 +0000 (0:00:01.779) 0:04:22.983 ********
2026-03-26 03:19:42.549224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:19:42.549257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:19:42.549278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-26 03:19:42.549299 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:19:42.549319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:19:42.549338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:19:42.549378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-26 03:19:50.125896 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:19:50.126110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:19:50.126162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:19:50.126179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-26 03:19:50.126193 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:19:50.126207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:19:50.126221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:19:50.126233 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:19:50.126283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:19:50.126308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:19:50.126322 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:19:50.126335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:19:50.126347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:19:50.126359 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:19:50.126371 | orchestrator |
2026-03-26 03:19:50.126385 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-26 03:19:50.126400 | orchestrator | Thursday 26 March 2026 03:19:43 +0000 (0:00:02.222) 0:04:25.205 ********
2026-03-26 03:19:50.126412 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:19:50.126424 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:19:50.126436 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:19:50.126451 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 03:19:50.126463 | orchestrator |
2026-03-26 03:19:50.126478 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-03-26 03:19:50.126491 | orchestrator | Thursday 26 March 2026 03:19:44 +0000 (0:00:01.124) 0:04:26.330 ********
2026-03-26 03:19:50.126505 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-26 03:19:50.126519 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-26 03:19:50.126533 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-26 03:19:50.126547 | orchestrator |
2026-03-26 03:19:50.126561 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-03-26 03:19:50.126633 | orchestrator | Thursday 26 March 2026 03:19:45 +0000 (0:00:01.156) 0:04:27.487 ********
2026-03-26 03:19:50.126649 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-26 03:19:50.126662 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-26 03:19:50.126675 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-26 03:19:50.126688 | orchestrator |
2026-03-26 03:19:50.126701 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-03-26 03:19:50.126716 | orchestrator | Thursday 26 March 2026 03:19:46 +0000 (0:00:00.973) 0:04:28.461 ********
2026-03-26 03:19:50.126740 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:19:50.126753 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:19:50.126766 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:19:50.126779 | orchestrator |
2026-03-26 03:19:50.126790 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-03-26 03:19:50.126803 | orchestrator | Thursday 26 March 2026 03:19:46 +0000 (0:00:00.558) 0:04:29.020 ********
2026-03-26 03:19:50.126816 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:19:50.126829 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:19:50.126841 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:19:50.126854 | orchestrator |
2026-03-26 03:19:50.126866 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-03-26 03:19:50.126879 | orchestrator | Thursday 26 March 2026 03:19:47 +0000 (0:00:00.540) 0:04:29.560 ********
2026-03-26 03:19:50.126892 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-26 03:19:50.126908 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-26 03:19:50.126921 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-26 03:19:50.126935 | orchestrator |
2026-03-26 03:19:50.126948 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-03-26 03:19:50.126961 | orchestrator | Thursday 26 March 2026 03:19:48 +0000 (0:00:01.434) 0:04:30.995 ********
2026-03-26 03:19:50.126995 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-26 03:20:09.061680 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-26 03:20:09.061762 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-26 03:20:09.061769 | orchestrator |
2026-03-26 03:20:09.061776 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-03-26 03:20:09.061783 | orchestrator | Thursday 26 March 2026 03:19:50 +0000 (0:00:01.238) 0:04:32.233 ********
2026-03-26 03:20:09.061788 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-03-26 03:20:09.061793 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-03-26 03:20:09.061799 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-03-26 03:20:09.061803 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-03-26 03:20:09.061809 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-03-26 03:20:09.061814 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-03-26 03:20:09.061818 | orchestrator |
2026-03-26 03:20:09.061824 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-03-26 03:20:09.061829 | orchestrator | Thursday 26 March 2026 03:19:54 +0000 (0:00:03.905) 0:04:36.138 ********
2026-03-26 03:20:09.061834 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:20:09.061840 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:20:09.061844 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:20:09.061849 | orchestrator |
2026-03-26 03:20:09.061854 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-03-26 03:20:09.061859 | orchestrator | Thursday 26 March 2026 03:19:54 +0000 (0:00:00.346) 0:04:36.484 ********
2026-03-26 03:20:09.061864 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:20:09.061869 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:20:09.061874 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:20:09.061879 | orchestrator |
2026-03-26 03:20:09.061884 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-03-26 03:20:09.061889 | orchestrator | Thursday 26 March 2026 03:19:54 +0000 (0:00:00.555) 0:04:37.040 ********
2026-03-26 03:20:09.061894 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:20:09.061899 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:20:09.061904 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:20:09.061910 | orchestrator |
2026-03-26 03:20:09.061917 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-03-26 03:20:09.061929 | orchestrator | Thursday 26 March 2026 03:19:56 +0000 (0:00:01.266) 0:04:38.306 ********
2026-03-26 03:20:09.061962 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-26 03:20:09.061971 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-26 03:20:09.061979 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-03-26 03:20:09.061986 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-26 03:20:09.061994 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-26 03:20:09.062002 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-03-26 03:20:09.062010 | orchestrator |
2026-03-26 03:20:09.062064 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-03-26 03:20:09.062070 | orchestrator | Thursday 26 March 2026 03:19:59 +0000 (0:00:03.341) 0:04:41.648 ********
2026-03-26 03:20:09.062075 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-26 03:20:09.062080 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-26 03:20:09.062085 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-26 03:20:09.062090 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-26 03:20:09.062095 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:20:09.062100 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-26 03:20:09.062105 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:20:09.062110 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-26 03:20:09.062115 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:20:09.062119 | orchestrator |
2026-03-26 03:20:09.062124 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-03-26 03:20:09.062129 | orchestrator | Thursday 26 March 2026 03:20:02 +0000 (0:00:03.453) 0:04:45.101 ********
2026-03-26 03:20:09.062134 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:20:09.062139 | orchestrator |
2026-03-26 03:20:09.062163 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-03-26 03:20:09.062170 | orchestrator | Thursday 26 March 2026 03:20:03 +0000 (0:00:00.149) 0:04:45.250 ********
2026-03-26 03:20:09.062175 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:20:09.062180 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:20:09.062184 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:20:09.062189 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:20:09.062194 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:20:09.062199 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:20:09.062204 | orchestrator |
2026-03-26 03:20:09.062209 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-03-26 03:20:09.062214 | orchestrator | Thursday 26 March 2026 03:20:04 +0000 (0:00:00.932) 0:04:46.183 ********
2026-03-26 03:20:09.062218 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-26 03:20:09.062223 | orchestrator |
2026-03-26 03:20:09.062228 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-03-26 03:20:09.062233 | orchestrator | Thursday 26 March 2026 03:20:04 +0000 (0:00:00.737) 0:04:46.921 ********
2026-03-26 03:20:09.062249 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:20:09.062267 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:20:09.062274 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:20:09.062279 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:20:09.062285 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:20:09.062291 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:20:09.062297 | orchestrator |
2026-03-26 03:20:09.062303 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-03-26 03:20:09.062309 | orchestrator | Thursday 26 March 2026 03:20:05 +0000 (0:00:00.852) 0:04:47.773 ********
2026-03-26 03:20:09.062324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:20:09.062333 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:20:09.062339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:20:09.062347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:20:09.062362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:20:14.080379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-26 03:20:14.080470 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:20:14.080479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:20:14.080485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-26 03:20:14.080490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:20:14.080495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:20:14.080522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:20:14.080564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-26 03:20:14.080575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev',
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:14.080583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:14.080602 | orchestrator | 2026-03-26 03:20:14.081246 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-26 03:20:14.081275 | orchestrator | Thursday 26 March 2026 03:20:09 +0000 (0:00:03.846) 0:04:51.619 ******** 2026-03-26 03:20:14.081284 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-26 03:20:14.081365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-26 03:20:16.490625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-26 03:20:16.490732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-26 03:20:16.490750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-26 03:20:16.490763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-26 03:20:16.490776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:16.490955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:20:38.939952 | orchestrator | 2026-03-26 03:20:38.940054 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-26 03:20:38.940072 | orchestrator | Thursday 26 March 2026 03:20:16 +0000 (0:00:06.982) 0:04:58.601 ******** 2026-03-26 03:20:38.940084 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:20:38.940097 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:20:38.940110 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:20:38.940121 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.940132 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:20:38.940139 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:38.940146 | orchestrator | 2026-03-26 03:20:38.940152 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-26 03:20:38.940160 | orchestrator | Thursday 26 March 2026 03:20:17 +0000 (0:00:01.365) 0:04:59.966 ******** 2026-03-26 03:20:38.940166 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-26 03:20:38.940174 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-26 03:20:38.940180 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-26 03:20:38.940187 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-26 03:20:38.940193 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-26 03:20:38.940199 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-26 03:20:38.940206 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-26 03:20:38.940214 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.940220 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-26 03:20:38.940226 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:38.940233 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-26 03:20:38.940239 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:20:38.940245 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-26 03:20:38.940252 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-26 03:20:38.940279 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-26 03:20:38.940285 | orchestrator | 2026-03-26 03:20:38.940293 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-26 03:20:38.940300 | orchestrator | Thursday 26 March 2026 03:20:21 +0000 (0:00:03.720) 0:05:03.687 ******** 2026-03-26 03:20:38.940306 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:20:38.940312 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:20:38.940318 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:20:38.940325 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.940331 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:20:38.940337 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:38.940343 | orchestrator | 2026-03-26 03:20:38.940350 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-26 03:20:38.940356 | orchestrator | Thursday 26 March 2026 03:20:22 +0000 (0:00:00.637) 0:05:04.324 ******** 2026-03-26 03:20:38.940362 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-26 03:20:38.940369 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-26 03:20:38.940375 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-26 03:20:38.940382 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-26 03:20:38.940388 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-26 03:20:38.940394 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-26 03:20:38.940413 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-26 03:20:38.940419 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-26 03:20:38.940426 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-26 03:20:38.940432 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-26 03:20:38.940438 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.940445 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-26 03:20:38.940451 | orchestrator | 
skipping: [testbed-node-1] 2026-03-26 03:20:38.940457 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-26 03:20:38.940464 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:38.940470 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-26 03:20:38.940535 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-26 03:20:38.940546 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-26 03:20:38.940558 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-26 03:20:38.940568 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-26 03:20:38.940579 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-26 03:20:38.940589 | orchestrator | 2026-03-26 03:20:38.940599 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-26 03:20:38.940610 | orchestrator | Thursday 26 March 2026 03:20:27 +0000 (0:00:05.383) 0:05:09.708 ******** 2026-03-26 03:20:38.940631 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 03:20:38.940642 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 03:20:38.940652 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 03:20:38.940663 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-26 03:20:38.940673 
| orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-26 03:20:38.940684 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-26 03:20:38.940695 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-26 03:20:38.940705 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-26 03:20:38.940715 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-26 03:20:38.940726 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 03:20:38.940737 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 03:20:38.940748 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-26 03:20:38.940758 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 03:20:38.940769 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-26 03:20:38.940779 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:20:38.940790 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-26 03:20:38.940800 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.940811 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-26 03:20:38.940822 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-26 03:20:38.940832 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:38.940842 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-26 03:20:38.940853 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-26 03:20:38.940865 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-26 03:20:38.940876 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-26 03:20:38.940888 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-26 03:20:38.940895 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-26 03:20:38.940901 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-26 03:20:38.940907 | orchestrator | 2026-03-26 03:20:38.940920 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-26 03:20:38.940927 | orchestrator | Thursday 26 March 2026 03:20:35 +0000 (0:00:07.669) 0:05:17.377 ******** 2026-03-26 03:20:38.940933 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:20:38.940939 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:20:38.940945 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:20:38.940951 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.940958 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:20:38.940964 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:38.940970 | orchestrator | 2026-03-26 03:20:38.940976 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-26 03:20:38.940982 | orchestrator | Thursday 26 March 2026 03:20:36 +0000 (0:00:00.839) 0:05:18.217 ******** 2026-03-26 03:20:38.940988 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:20:38.941001 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:20:38.941007 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:20:38.941014 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.941025 | orchestrator | 
skipping: [testbed-node-1] 2026-03-26 03:20:38.941031 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:38.941038 | orchestrator | 2026-03-26 03:20:38.941044 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-26 03:20:38.941051 | orchestrator | Thursday 26 March 2026 03:20:36 +0000 (0:00:00.654) 0:05:18.871 ******** 2026-03-26 03:20:38.941062 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:38.941073 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:20:38.941092 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:40.388844 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:20:40.388965 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:20:40.388991 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:20:40.389012 | orchestrator | 2026-03-26 03:20:40.389034 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-26 03:20:40.389055 | orchestrator | Thursday 26 March 2026 03:20:38 +0000 (0:00:02.172) 0:05:21.043 ******** 2026-03-26 03:20:40.389080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-03-26 03:20:40.389097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-26 03:20:40.389143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-26 03:20:40.389157 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:20:40.389187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-26 03:20:40.389223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-26 03:20:40.389256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-26 03:20:40.389268 | orchestrator | skipping: 
[testbed-node-4] 2026-03-26 03:20:40.389280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-26 03:20:40.389291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-26 03:20:40.389303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-26 03:20:40.389326 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:20:40.389339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-26 03:20:40.389361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:20:43.895166 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:20:43.895300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-26 03:20:43.895322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-26 03:20:43.895335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:20:43.895347 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:20:43.895360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:20:43.895395 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:20:43.895407 | orchestrator |
2026-03-26 03:20:43.895420 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-26 03:20:43.895433 | orchestrator | Thursday 26 March 2026 03:20:40 +0000 (0:00:01.563) 0:05:22.607 ********
2026-03-26 03:20:43.895445 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-26 03:20:43.895456 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-26 03:20:43.895537 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:20:43.895554 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-26 03:20:43.895565 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-26 03:20:43.895577 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:20:43.895588 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-26 03:20:43.895599 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-26 03:20:43.895610 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:20:43.895621 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-26 03:20:43.895632 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-26 03:20:43.895643 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:20:43.895654 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-26 03:20:43.895665 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-26 03:20:43.895676 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:20:43.895687 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-26 03:20:43.895701 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-26 03:20:43.895713 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:20:43.895726 | orchestrator |
2026-03-26 03:20:43.895739 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-26 03:20:43.895752 | orchestrator | Thursday 26 March 2026 03:20:41 +0000 (0:00:00.968) 0:05:23.576 ********
2026-03-26 03:20:43.895785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-26 03:20:43.895801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes':
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:20:43.895823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-26 03:20:43.895843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:20:43.895857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:20:43.895878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705889 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:21:36.705977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-26 03:21:36.706009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:21:36.706117 | orchestrator | 2026-03-26 03:21:36.706140 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-26 03:21:36.706163 | orchestrator | Thursday 26 March 2026 03:20:44 +0000 (0:00:02.706) 
0:05:26.282 ********
2026-03-26 03:21:36.706225 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:21:36.706298 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:21:36.706313 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:21:36.706326 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:21:36.706337 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:21:36.706348 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:21:36.706359 | orchestrator |
2026-03-26 03:21:36.706372 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-26 03:21:36.706391 | orchestrator | Thursday 26 March 2026 03:20:44 +0000 (0:00:00.830) 0:05:27.113 ********
2026-03-26 03:21:36.706438 | orchestrator |
2026-03-26 03:21:36.706456 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-26 03:21:36.706474 | orchestrator | Thursday 26 March 2026 03:20:45 +0000 (0:00:00.169) 0:05:27.282 ********
2026-03-26 03:21:36.706492 | orchestrator |
2026-03-26 03:21:36.706511 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-26 03:21:36.706542 | orchestrator | Thursday 26 March 2026 03:20:45 +0000 (0:00:00.149) 0:05:27.432 ********
2026-03-26 03:21:36.706561 | orchestrator |
2026-03-26 03:21:36.706579 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-26 03:21:36.706598 | orchestrator | Thursday 26 March 2026 03:20:45 +0000 (0:00:00.154) 0:05:27.587 ********
2026-03-26 03:21:36.706617 | orchestrator |
2026-03-26 03:21:36.706637 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-26 03:21:36.706655 | orchestrator | Thursday 26 March 2026 03:20:45 +0000 (0:00:00.148) 0:05:27.735 ********
2026-03-26 03:21:36.706670 | orchestrator |
2026-03-26 03:21:36.706681 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-26 03:21:36.706692 | orchestrator | Thursday 26 March 2026 03:20:45 +0000 (0:00:00.324) 0:05:28.060 ********
2026-03-26 03:21:36.706703 | orchestrator |
2026-03-26 03:21:36.706714 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-26 03:21:36.706725 | orchestrator | Thursday 26 March 2026 03:20:46 +0000 (0:00:00.157) 0:05:28.218 ********
2026-03-26 03:21:36.706736 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:21:36.706747 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:21:36.706757 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:21:36.706768 | orchestrator |
2026-03-26 03:21:36.706779 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-26 03:21:36.706790 | orchestrator | Thursday 26 March 2026 03:20:53 +0000 (0:00:07.475) 0:05:35.693 ********
2026-03-26 03:21:36.706801 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:21:36.706812 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:21:36.706823 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:21:36.706833 | orchestrator |
2026-03-26 03:21:36.706844 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-26 03:21:36.706868 | orchestrator | Thursday 26 March 2026 03:21:09 +0000 (0:00:15.667) 0:05:51.361 ********
2026-03-26 03:21:36.706879 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:21:36.706890 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:21:36.706901 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:21:36.706912 | orchestrator |
2026-03-26 03:21:36.706938 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-26 03:23:58.016320 | orchestrator | Thursday 26 March 2026 03:21:36 +0000 (0:00:27.431) 0:06:18.792 ********
2026-03-26 03:23:58.016423 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:23:58.016436 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:23:58.016445 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:23:58.016454 | orchestrator |
2026-03-26 03:23:58.016462 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-26 03:23:58.016471 | orchestrator | Thursday 26 March 2026 03:22:19 +0000 (0:00:43.096) 0:07:01.889 ********
2026-03-26 03:23:58.016479 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:23:58.016487 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:23:58.016495 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:23:58.016503 | orchestrator |
2026-03-26 03:23:58.016512 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-26 03:23:58.016520 | orchestrator | Thursday 26 March 2026 03:22:20 +0000 (0:00:00.751) 0:07:02.640 ********
2026-03-26 03:23:58.016528 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:23:58.016536 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:23:58.016544 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:23:58.016552 | orchestrator |
2026-03-26 03:23:58.016560 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-26 03:23:58.016568 | orchestrator | Thursday 26 March 2026 03:22:21 +0000 (0:00:00.848) 0:07:03.489 ********
2026-03-26 03:23:58.016576 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:23:58.016583 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:23:58.016592 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:23:58.016600 | orchestrator |
2026-03-26 03:23:58.016619 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-26 03:23:58.016628 | orchestrator | Thursday 26 March 2026 03:22:47 +0000 (0:00:25.699) 0:07:29.189 ********
2026-03-26 03:23:58.016636 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:23:58.016644 | orchestrator |
2026-03-26 03:23:58.016652 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-26 03:23:58.016660 | orchestrator | Thursday 26 March 2026 03:22:47 +0000 (0:00:00.148) 0:07:29.337 ********
2026-03-26 03:23:58.016668 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:23:58.016676 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:58.016684 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:23:58.016692 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:23:58.016700 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:23:58.016708 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-26 03:23:58.016718 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-26 03:23:58.016726 | orchestrator |
2026-03-26 03:23:58.016734 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-26 03:23:58.016742 | orchestrator | Thursday 26 March 2026 03:23:10 +0000 (0:00:22.999) 0:07:52.337 ********
2026-03-26 03:23:58.016750 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:23:58.016758 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:23:58.016766 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:23:58.016774 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:23:58.016782 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:23:58.016790 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:58.016798 | orchestrator |
2026-03-26 03:23:58.016806 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-26 03:23:58.016836 | orchestrator | Thursday 26 March 2026 03:23:20 +0000 (0:00:10.321) 0:08:02.658 ********
2026-03-26 03:23:58.016846 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:23:58.016856 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:23:58.016866 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:58.016874 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:23:58.016884 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:23:58.016894 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-03-26 03:23:58.016903 | orchestrator |
2026-03-26 03:23:58.016925 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-26 03:23:58.016935 | orchestrator | Thursday 26 March 2026 03:23:25 +0000 (0:00:04.471) 0:08:07.130 ********
2026-03-26 03:23:58.016944 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-26 03:23:58.016954 | orchestrator |
2026-03-26 03:23:58.016963 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-26 03:23:58.016972 | orchestrator | Thursday 26 March 2026 03:23:37 +0000 (0:00:12.681) 0:08:19.811 ********
2026-03-26 03:23:58.016982 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-26 03:23:58.016991 | orchestrator |
2026-03-26 03:23:58.017000 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-26 03:23:58.017009 | orchestrator | Thursday 26 March 2026 03:23:39 +0000 (0:00:01.569) 0:08:21.381 ********
2026-03-26 03:23:58.017019 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:23:58.017028 | orchestrator |
2026-03-26 03:23:58.017037 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-26 03:23:58.017046 | orchestrator | Thursday 26 March 2026 03:23:41 +0000 (0:00:01.801) 0:08:23.182 ********
2026-03-26 03:23:58.017056 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-26 03:23:58.017066 | orchestrator |
2026-03-26 03:23:58.017074 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-26 03:23:58.017082 | orchestrator | Thursday 26 March 2026 03:23:52 +0000 (0:00:11.180) 0:08:34.362 ********
2026-03-26 03:23:58.017090 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:23:58.017098 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:23:58.017106 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:23:58.017114 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:23:58.017122 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:23:58.017130 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:23:58.017138 | orchestrator |
2026-03-26 03:23:58.017146 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-26 03:23:58.017154 | orchestrator |
2026-03-26 03:23:58.017162 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-26 03:23:58.017184 | orchestrator | Thursday 26 March 2026 03:23:54 +0000 (0:00:01.936) 0:08:36.298 ********
2026-03-26 03:23:58.017193 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:23:58.017226 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:23:58.017240 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:23:58.017254 | orchestrator |
2026-03-26 03:23:58.017268 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-26 03:23:58.017281 | orchestrator |
2026-03-26 03:23:58.017295 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-26 03:23:58.017304 | orchestrator | Thursday 26 March 2026 03:23:55 +0000 (0:00:00.954) 0:08:37.253 ********
2026-03-26 03:23:58.017312 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:58.017320 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:23:58.017328 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:23:58.017336 | orchestrator |
2026-03-26 03:23:58.017344 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-26 03:23:58.017352 | orchestrator |
2026-03-26 03:23:58.017360 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-26 03:23:58.017368 | orchestrator | Thursday 26 March 2026 03:23:55 +0000 (0:00:00.770) 0:08:38.023 ********
2026-03-26 03:23:58.017384 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-26 03:23:58.017392 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-26 03:23:58.017401 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-26 03:23:58.017409 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-26 03:23:58.017417 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-26 03:23:58.017446 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-26 03:23:58.017455 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:23:58.017463 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-26 03:23:58.017471 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-26 03:23:58.017479 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-26 03:23:58.017486 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-26 03:23:58.017494 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-26 03:23:58.017502 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-26 03:23:58.017510 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:23:58.017518 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-26 03:23:58.017526 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-26 03:23:58.017534 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-26 03:23:58.017542 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-26 03:23:58.017550 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-26 03:23:58.017558 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-26 03:23:58.017565 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:23:58.017573 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-26 03:23:58.017581 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-26 03:23:58.017589 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-26 03:23:58.017597 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-26 03:23:58.017605 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-26 03:23:58.017613 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-26 03:23:58.017621 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:58.017629 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-26 03:23:58.017636 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-26 03:23:58.017650 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-26 03:23:58.017658 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-26 03:23:58.017666 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-26 03:23:58.017674 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-26 03:23:58.017681 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:23:58.017689 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-26 03:23:58.017697 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-26 03:23:58.017705 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-26 03:23:58.017713 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-26 03:23:58.017721 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-26 03:23:58.017729 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-26 03:23:58.017737 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:23:58.017745 | orchestrator |
2026-03-26 03:23:58.017753 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-26 03:23:58.017761 | orchestrator |
2026-03-26 03:23:58.017769 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-26 03:23:58.017789 | orchestrator | Thursday 26 March 2026 03:23:57 +0000 (0:00:01.448) 0:08:39.472 ********
2026-03-26 03:23:58.017797 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-26 03:23:58.017805 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-26 03:23:58.017813 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:58.017821 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-26 03:23:58.017829 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-26 03:23:58.017837 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:23:58.017845 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-26 03:23:58.017853 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-26 03:23:58.017861 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:23:58.017869 | orchestrator |
2026-03-26 03:23:58.017882 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-26 03:23:59.921187 | orchestrator |
2026-03-26 03:23:59.921330 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-26 03:23:59.921345 |
orchestrator | Thursday 26 March 2026 03:23:58 +0000 (0:00:00.647) 0:08:40.119 ********
2026-03-26 03:23:59.921355 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:59.921364 | orchestrator |
2026-03-26 03:23:59.921372 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-26 03:23:59.921381 | orchestrator |
2026-03-26 03:23:59.921389 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-26 03:23:59.921397 | orchestrator | Thursday 26 March 2026 03:23:58 +0000 (0:00:00.986) 0:08:41.106 ********
2026-03-26 03:23:59.921405 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:23:59.921413 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:23:59.921422 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:23:59.921430 | orchestrator |
2026-03-26 03:23:59.921438 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:23:59.921446 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:23:59.921457 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-03-26 03:23:59.921465 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-26 03:23:59.921474 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-03-26 03:23:59.921485 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-26 03:23:59.921498 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-03-26 03:23:59.921512 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-26 03:23:59.921526 | orchestrator |
2026-03-26 03:23:59.921539 | orchestrator |
2026-03-26 03:23:59.921553 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:23:59.921566 | orchestrator | Thursday 26 March 2026 03:23:59 +0000 (0:00:00.495) 0:08:41.601 ********
2026-03-26 03:23:59.921579 | orchestrator | ===============================================================================
2026-03-26 03:23:59.921594 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 43.10s
2026-03-26 03:23:59.921607 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 32.74s
2026-03-26 03:23:59.921620 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 27.43s
2026-03-26 03:23:59.921663 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.70s
2026-03-26 03:23:59.921678 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 23.88s
2026-03-26 03:23:59.921693 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.00s
2026-03-26 03:23:59.921707 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.76s
2026-03-26 03:23:59.921739 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.28s
2026-03-26 03:23:59.921750 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.67s
2026-03-26 03:23:59.921759 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.47s
2026-03-26 03:23:59.921768 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.68s
2026-03-26 03:23:59.921777 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.61s
2026-03-26 03:23:59.921786 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.35s
2026-03-26 03:23:59.921795 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.50s
2026-03-26 03:23:59.921804 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.18s
2026-03-26 03:23:59.921813 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.32s
2026-03-26 03:23:59.921822 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.67s
2026-03-26 03:23:59.921831 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.48s
2026-03-26 03:23:59.921840 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.38s
2026-03-26 03:23:59.921849 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.35s
2026-03-26 03:24:02.483817 | orchestrator | 2026-03-26 03:24:02 | INFO  | Task f721f98e-b0b6-4aeb-a88b-3394e455017a (horizon) was prepared for execution.
2026-03-26 03:24:02.483926 | orchestrator | 2026-03-26 03:24:02 | INFO  | It takes a moment until task f721f98e-b0b6-4aeb-a88b-3394e455017a (horizon) has been started and output is visible here.
2026-03-26 03:24:10.048744 | orchestrator | 2026-03-26 03:24:10.048826 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:24:10.048833 | orchestrator | 2026-03-26 03:24:10.048838 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:24:10.048842 | orchestrator | Thursday 26 March 2026 03:24:06 +0000 (0:00:00.301) 0:00:00.301 ******** 2026-03-26 03:24:10.048847 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:10.048851 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:10.048855 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:10.048859 | orchestrator | 2026-03-26 03:24:10.048863 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:24:10.048867 | orchestrator | Thursday 26 March 2026 03:24:07 +0000 (0:00:00.356) 0:00:00.657 ******** 2026-03-26 03:24:10.048871 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-03-26 03:24:10.048876 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-03-26 03:24:10.048880 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-03-26 03:24:10.048883 | orchestrator | 2026-03-26 03:24:10.048888 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-03-26 03:24:10.048892 | orchestrator | 2026-03-26 03:24:10.048896 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-26 03:24:10.048900 | orchestrator | Thursday 26 March 2026 03:24:07 +0000 (0:00:00.454) 0:00:01.112 ******** 2026-03-26 03:24:10.048904 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:24:10.048908 | orchestrator | 2026-03-26 03:24:10.048912 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 
2026-03-26 03:24:10.048916 | orchestrator | Thursday 26 March 2026 03:24:08 +0000 (0:00:00.535) 0:00:01.647 ******** 2026-03-26 03:24:10.048951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:24:10.048968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:24:10.048982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:24:10.048987 | orchestrator | 2026-03-26 03:24:10.048991 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-26 03:24:10.048994 | orchestrator | Thursday 26 March 2026 03:24:09 +0000 (0:00:01.204) 0:00:02.852 ******** 2026-03-26 03:24:10.048998 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:10.049002 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:10.049006 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:10.049009 | orchestrator | 2026-03-26 03:24:10.049013 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-26 03:24:10.049017 | orchestrator | Thursday 26 March 2026 03:24:09 +0000 (0:00:00.538) 0:00:03.390 ******** 2026-03-26 03:24:10.049023 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-26 03:24:16.545172 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-26 03:24:16.545360 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-26 03:24:16.545377 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-03-26 03:24:16.545390 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-26 03:24:16.545402 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-26 03:24:16.545413 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-26 03:24:16.545424 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-26 03:24:16.545486 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-26 03:24:16.545499 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-26 03:24:16.545510 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-26 03:24:16.545522 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-26 03:24:16.545532 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-26 03:24:16.545543 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-26 03:24:16.545554 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-26 03:24:16.545565 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-26 03:24:16.545576 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-26 03:24:16.545587 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-26 03:24:16.545598 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-26 03:24:16.545608 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-26 03:24:16.545619 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'mistral', 'enabled': False})  2026-03-26 03:24:16.545630 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-26 03:24:16.545641 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-26 03:24:16.545652 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-26 03:24:16.545664 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-26 03:24:16.545677 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-26 03:24:16.545688 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-26 03:24:16.545699 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-26 03:24:16.545728 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-26 03:24:16.545741 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-26 03:24:16.545754 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-26 03:24:16.545766 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-26 
03:24:16.545780 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-26 03:24:16.545793 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-26 03:24:16.545806 | orchestrator | 2026-03-26 03:24:16.545820 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-26 03:24:16.545833 | orchestrator | Thursday 26 March 2026 03:24:10 +0000 (0:00:00.825) 0:00:04.216 ******** 2026-03-26 03:24:16.545847 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:16.545867 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:16.545880 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:16.545892 | orchestrator | 2026-03-26 03:24:16.545905 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-26 03:24:16.545918 | orchestrator | Thursday 26 March 2026 03:24:11 +0000 (0:00:00.371) 0:00:04.588 ******** 2026-03-26 03:24:16.545930 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.545944 | orchestrator | 2026-03-26 03:24:16.545974 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-26 03:24:16.545987 | orchestrator | Thursday 26 March 2026 03:24:11 +0000 (0:00:00.330) 0:00:04.919 ******** 2026-03-26 03:24:16.546000 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546074 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:24:16.546089 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:24:16.546102 | orchestrator | 2026-03-26 03:24:16.546115 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-26 03:24:16.546127 | orchestrator | Thursday 26 March 2026 03:24:11 +0000 (0:00:00.325) 0:00:05.244 
******** 2026-03-26 03:24:16.546139 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:16.546150 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:16.546161 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:16.546172 | orchestrator | 2026-03-26 03:24:16.546204 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-26 03:24:16.546216 | orchestrator | Thursday 26 March 2026 03:24:12 +0000 (0:00:00.344) 0:00:05.589 ******** 2026-03-26 03:24:16.546228 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546239 | orchestrator | 2026-03-26 03:24:16.546249 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-26 03:24:16.546260 | orchestrator | Thursday 26 March 2026 03:24:12 +0000 (0:00:00.149) 0:00:05.738 ******** 2026-03-26 03:24:16.546271 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546283 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:24:16.546294 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:24:16.546305 | orchestrator | 2026-03-26 03:24:16.546315 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-26 03:24:16.546326 | orchestrator | Thursday 26 March 2026 03:24:12 +0000 (0:00:00.341) 0:00:06.080 ******** 2026-03-26 03:24:16.546337 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:16.546348 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:16.546359 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:16.546370 | orchestrator | 2026-03-26 03:24:16.546381 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-26 03:24:16.546392 | orchestrator | Thursday 26 March 2026 03:24:13 +0000 (0:00:00.540) 0:00:06.620 ******** 2026-03-26 03:24:16.546403 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546413 | orchestrator | 2026-03-26 03:24:16.546424 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-03-26 03:24:16.546435 | orchestrator | Thursday 26 March 2026 03:24:13 +0000 (0:00:00.134) 0:00:06.755 ******** 2026-03-26 03:24:16.546446 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546457 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:24:16.546468 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:24:16.546478 | orchestrator | 2026-03-26 03:24:16.546489 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-26 03:24:16.546500 | orchestrator | Thursday 26 March 2026 03:24:13 +0000 (0:00:00.332) 0:00:07.087 ******** 2026-03-26 03:24:16.546511 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:16.546522 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:16.546532 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:16.546543 | orchestrator | 2026-03-26 03:24:16.546554 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-26 03:24:16.546565 | orchestrator | Thursday 26 March 2026 03:24:13 +0000 (0:00:00.325) 0:00:07.413 ******** 2026-03-26 03:24:16.546576 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546587 | orchestrator | 2026-03-26 03:24:16.546607 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-26 03:24:16.546623 | orchestrator | Thursday 26 March 2026 03:24:14 +0000 (0:00:00.148) 0:00:07.562 ******** 2026-03-26 03:24:16.546643 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546662 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:24:16.546679 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:24:16.546694 | orchestrator | 2026-03-26 03:24:16.546711 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-26 03:24:16.546727 | orchestrator | Thursday 26 March 2026 03:24:14 +0000 (0:00:00.568) 
0:00:08.131 ******** 2026-03-26 03:24:16.546743 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:16.546758 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:16.546784 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:16.546801 | orchestrator | 2026-03-26 03:24:16.546816 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-26 03:24:16.546834 | orchestrator | Thursday 26 March 2026 03:24:15 +0000 (0:00:00.343) 0:00:08.475 ******** 2026-03-26 03:24:16.546851 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546867 | orchestrator | 2026-03-26 03:24:16.546883 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-26 03:24:16.546899 | orchestrator | Thursday 26 March 2026 03:24:15 +0000 (0:00:00.143) 0:00:08.618 ******** 2026-03-26 03:24:16.546915 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.546934 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:24:16.546951 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:24:16.546967 | orchestrator | 2026-03-26 03:24:16.546984 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-26 03:24:16.547001 | orchestrator | Thursday 26 March 2026 03:24:15 +0000 (0:00:00.335) 0:00:08.953 ******** 2026-03-26 03:24:16.547020 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:24:16.547039 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:24:16.547059 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:24:16.547078 | orchestrator | 2026-03-26 03:24:16.547097 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-26 03:24:16.547110 | orchestrator | Thursday 26 March 2026 03:24:15 +0000 (0:00:00.350) 0:00:09.303 ******** 2026-03-26 03:24:16.547121 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:16.547132 | orchestrator | 2026-03-26 03:24:16.547143 | orchestrator | 
TASK [horizon : Update custom policy file name] ********************************
2026-03-26 03:24:16.547154 | orchestrator | Thursday 26 March 2026 03:24:16 +0000 (0:00:00.334) 0:00:09.638 ********
2026-03-26 03:24:16.547165 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:16.547176 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:24:16.547209 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:24:16.547220 | orchestrator |
2026-03-26 03:24:16.547231 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-26 03:24:16.547254 | orchestrator | Thursday 26 March 2026 03:24:16 +0000 (0:00:00.358) 0:00:09.997 ********
2026-03-26 03:24:31.802397 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:24:31.802497 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:24:31.802509 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:24:31.802518 | orchestrator |
2026-03-26 03:24:31.802527 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-26 03:24:31.802537 | orchestrator | Thursday 26 March 2026 03:24:16 +0000 (0:00:00.353) 0:00:10.350 ********
2026-03-26 03:24:31.802545 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.802555 | orchestrator |
2026-03-26 03:24:31.802564 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-26 03:24:31.802572 | orchestrator | Thursday 26 March 2026 03:24:17 +0000 (0:00:00.165) 0:00:10.516 ********
2026-03-26 03:24:31.802580 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.802589 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:24:31.802597 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:24:31.802605 | orchestrator |
2026-03-26 03:24:31.802613 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-26 03:24:31.802645 | orchestrator | Thursday 26 March 2026 03:24:17 +0000 (0:00:00.323) 0:00:10.840 ********
2026-03-26 03:24:31.802654 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:24:31.802662 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:24:31.802670 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:24:31.802678 | orchestrator |
2026-03-26 03:24:31.802686 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-26 03:24:31.802694 | orchestrator | Thursday 26 March 2026 03:24:17 +0000 (0:00:00.587) 0:00:11.427 ********
2026-03-26 03:24:31.802702 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.802710 | orchestrator |
2026-03-26 03:24:31.802719 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-26 03:24:31.802729 | orchestrator | Thursday 26 March 2026 03:24:18 +0000 (0:00:00.153) 0:00:11.581 ********
2026-03-26 03:24:31.802741 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.802753 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:24:31.802762 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:24:31.802770 | orchestrator |
2026-03-26 03:24:31.802778 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-26 03:24:31.802786 | orchestrator | Thursday 26 March 2026 03:24:18 +0000 (0:00:00.374) 0:00:11.955 ********
2026-03-26 03:24:31.802794 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:24:31.802802 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:24:31.802810 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:24:31.802818 | orchestrator |
2026-03-26 03:24:31.802826 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-26 03:24:31.802835 | orchestrator | Thursday 26 March 2026 03:24:18 +0000 (0:00:00.395) 0:00:12.351 ********
2026-03-26 03:24:31.802843 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.802851 | orchestrator |
2026-03-26 03:24:31.802859 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-26 03:24:31.802867 | orchestrator | Thursday 26 March 2026 03:24:19 +0000 (0:00:00.129) 0:00:12.480 ********
2026-03-26 03:24:31.802875 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.802883 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:24:31.802891 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:24:31.802899 | orchestrator |
2026-03-26 03:24:31.802907 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-03-26 03:24:31.802915 | orchestrator | Thursday 26 March 2026 03:24:19 +0000 (0:00:00.620) 0:00:13.100 ********
2026-03-26 03:24:31.802923 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:24:31.802931 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:24:31.802939 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:24:31.802947 | orchestrator |
2026-03-26 03:24:31.802955 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-03-26 03:24:31.802963 | orchestrator | Thursday 26 March 2026 03:24:20 +0000 (0:00:00.389) 0:00:13.490 ********
2026-03-26 03:24:31.802971 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.802979 | orchestrator |
2026-03-26 03:24:31.802987 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-03-26 03:24:31.802995 | orchestrator | Thursday 26 March 2026 03:24:20 +0000 (0:00:00.156) 0:00:13.647 ********
2026-03-26 03:24:31.803016 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.803024 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:24:31.803032 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:24:31.803040 | orchestrator |
2026-03-26 03:24:31.803048 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-03-26 03:24:31.803056 | orchestrator | Thursday 26 March 2026 03:24:20 +0000 (0:00:00.340) 0:00:13.987 ********
2026-03-26 03:24:31.803065 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:24:31.803073 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:24:31.803081 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:24:31.803089 | orchestrator |
2026-03-26 03:24:31.803097 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-03-26 03:24:31.803112 | orchestrator | Thursday 26 March 2026 03:24:22 +0000 (0:00:02.095) 0:00:16.082 ********
2026-03-26 03:24:31.803120 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-26 03:24:31.803129 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-26 03:24:31.803137 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-03-26 03:24:31.803145 | orchestrator |
2026-03-26 03:24:31.803153 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-03-26 03:24:31.803161 | orchestrator | Thursday 26 March 2026 03:24:24 +0000 (0:00:02.088) 0:00:18.171 ********
2026-03-26 03:24:31.803196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-26 03:24:31.803206 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-26 03:24:31.803214 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-03-26 03:24:31.803222 | orchestrator |
2026-03-26 03:24:31.803230 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-03-26 03:24:31.803251 | orchestrator | Thursday 26 March 2026 03:24:26 +0000 (0:00:01.882) 0:00:20.053 ********
2026-03-26 03:24:31.803260 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-26 03:24:31.803268 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-26 03:24:31.803276 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-03-26 03:24:31.803284 | orchestrator |
2026-03-26 03:24:31.803292 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-03-26 03:24:31.803300 | orchestrator | Thursday 26 March 2026 03:24:28 +0000 (0:00:01.562) 0:00:21.616 ********
2026-03-26 03:24:31.803308 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.803316 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:24:31.803343 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:24:31.803351 | orchestrator |
2026-03-26 03:24:31.803359 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-03-26 03:24:31.803380 | orchestrator | Thursday 26 March 2026 03:24:28 +0000 (0:00:00.566) 0:00:22.182 ********
2026-03-26 03:24:31.803388 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:24:31.803396 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:24:31.803404 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:24:31.803412 | orchestrator |
2026-03-26 03:24:31.803420 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-26 03:24:31.803439 | orchestrator | Thursday 26 March 2026 03:24:29 +0000 (0:00:00.294) 0:00:22.477 ********
2026-03-26 03:24:31.803447 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:24:31.803455 | orchestrator |
2026-03-26 03:24:31.803463 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-26 03:24:31.803471 | orchestrator |
Thursday 26 March 2026 03:24:29 +0000 (0:00:00.652) 0:00:23.129 ******** 2026-03-26 03:24:31.803490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:24:31.803519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:24:32.463328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:24:32.463459 | orchestrator | 2026-03-26 03:24:32.463473 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-26 03:24:32.463483 | orchestrator | Thursday 26 March 2026 03:24:31 +0000 (0:00:02.119) 0:00:25.249 ******** 2026-03-26 03:24:32.463509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 03:24:32.463526 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:32.463542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 03:24:32.464287 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:24:32.464393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 03:24:35.153627 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:24:35.153720 | orchestrator | 2026-03-26 03:24:35.153732 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-03-26 03:24:35.153740 | orchestrator | Thursday 26 March 2026 03:24:32 +0000 (0:00:00.667) 0:00:25.917 ******** 2026-03-26 03:24:35.153766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 03:24:35.153777 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:24:35.153801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 03:24:35.153828 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:24:35.153836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 03:24:35.153843 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:24:35.153850 | orchestrator | 2026-03-26 03:24:35.153856 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-26 03:24:35.153899 | orchestrator | Thursday 26 March 2026 03:24:33 +0000 (0:00:00.894) 0:00:26.811 ******** 2026-03-26 03:24:35.153916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:25:21.309066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:25:21.309290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 03:25:21.309311 | orchestrator | 
2026-03-26 03:25:21.309325 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-26 03:25:21.309338 | orchestrator | Thursday 26 March 2026 03:24:35 +0000 (0:00:01.794) 0:00:28.606 ******** 2026-03-26 03:25:21.309350 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:25:21.309362 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:25:21.309373 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:25:21.309384 | orchestrator | 2026-03-26 03:25:21.309396 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-26 03:25:21.309407 | orchestrator | Thursday 26 March 2026 03:24:35 +0000 (0:00:00.325) 0:00:28.931 ******** 2026-03-26 03:25:21.309419 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:25:21.309430 | orchestrator | 2026-03-26 03:25:21.309441 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-26 03:25:21.309452 | orchestrator | Thursday 26 March 2026 03:24:36 +0000 (0:00:00.600) 0:00:29.532 ******** 2026-03-26 03:25:21.309464 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:25:21.309475 | orchestrator | 2026-03-26 03:25:21.309486 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-26 03:25:21.309497 | orchestrator | Thursday 26 March 2026 03:24:38 +0000 (0:00:02.347) 0:00:31.879 ******** 2026-03-26 03:25:21.309508 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:25:21.309518 | orchestrator | 2026-03-26 03:25:21.309529 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-26 03:25:21.309541 | orchestrator | Thursday 26 March 2026 03:24:41 +0000 (0:00:02.704) 0:00:34.583 ******** 2026-03-26 03:25:21.309552 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:25:21.309563 | orchestrator 
| 2026-03-26 03:25:21.309582 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-26 03:25:21.309593 | orchestrator | Thursday 26 March 2026 03:24:58 +0000 (0:00:17.350) 0:00:51.934 ******** 2026-03-26 03:25:21.309604 | orchestrator | 2026-03-26 03:25:21.309615 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-26 03:25:21.309626 | orchestrator | Thursday 26 March 2026 03:24:58 +0000 (0:00:00.087) 0:00:52.022 ******** 2026-03-26 03:25:21.309650 | orchestrator | 2026-03-26 03:25:21.309661 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-26 03:25:21.309672 | orchestrator | Thursday 26 March 2026 03:24:58 +0000 (0:00:00.066) 0:00:52.089 ******** 2026-03-26 03:25:21.309683 | orchestrator | 2026-03-26 03:25:21.309695 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-26 03:25:21.309706 | orchestrator | Thursday 26 March 2026 03:24:58 +0000 (0:00:00.082) 0:00:52.171 ******** 2026-03-26 03:25:21.309717 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:25:21.309727 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:25:21.309738 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:25:21.309749 | orchestrator | 2026-03-26 03:25:21.309760 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:25:21.309773 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-26 03:25:21.309785 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-26 03:25:21.309796 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-26 03:25:21.309808 | orchestrator | 2026-03-26 03:25:21.309818 | orchestrator | 2026-03-26 03:25:21.309829 
| orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:25:21.309840 | orchestrator | Thursday 26 March 2026 03:25:21 +0000 (0:00:22.557) 0:01:14.729 ******** 2026-03-26 03:25:21.309851 | orchestrator | =============================================================================== 2026-03-26 03:25:21.309862 | orchestrator | horizon : Restart horizon container ------------------------------------ 22.56s 2026-03-26 03:25:21.309873 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.35s 2026-03-26 03:25:21.309884 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.70s 2026-03-26 03:25:21.309895 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.35s 2026-03-26 03:25:21.309906 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.12s 2026-03-26 03:25:21.309923 | orchestrator | horizon : Copying over config.json files for services ------------------- 2.10s 2026-03-26 03:25:21.309935 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.09s 2026-03-26 03:25:21.309946 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.88s 2026-03-26 03:25:21.309957 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.79s 2026-03-26 03:25:21.309968 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.56s 2026-03-26 03:25:21.309979 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.20s 2026-03-26 03:25:21.309990 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.89s 2026-03-26 03:25:21.310001 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-03-26 03:25:21.310080 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2026-03-26 03:25:21.744464 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.65s 2026-03-26 03:25:21.744563 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.62s 2026-03-26 03:25:21.744577 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-03-26 03:25:21.744615 | orchestrator | horizon : Update policy file name --------------------------------------- 0.59s 2026-03-26 03:25:21.744627 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2026-03-26 03:25:21.744638 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.57s 2026-03-26 03:25:24.236775 | orchestrator | 2026-03-26 03:25:24 | INFO  | Task 845c9a2d-6656-4f8a-8038-0ee897175f20 (skyline) was prepared for execution. 2026-03-26 03:25:24.236848 | orchestrator | 2026-03-26 03:25:24 | INFO  | It takes a moment until task 845c9a2d-6656-4f8a-8038-0ee897175f20 (skyline) has been started and output is visible here. 
2026-03-26 03:25:55.918636 | orchestrator | 2026-03-26 03:25:55.918772 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:25:55.918799 | orchestrator | 2026-03-26 03:25:55.918817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:25:55.918838 | orchestrator | Thursday 26 March 2026 03:25:28 +0000 (0:00:00.270) 0:00:00.270 ******** 2026-03-26 03:25:55.918858 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:25:55.918878 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:25:55.918898 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:25:55.918920 | orchestrator | 2026-03-26 03:25:55.918940 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:25:55.918960 | orchestrator | Thursday 26 March 2026 03:25:28 +0000 (0:00:00.313) 0:00:00.584 ******** 2026-03-26 03:25:55.918980 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-03-26 03:25:55.919000 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-03-26 03:25:55.919013 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-03-26 03:25:55.919024 | orchestrator | 2026-03-26 03:25:55.919035 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-03-26 03:25:55.919046 | orchestrator | 2026-03-26 03:25:55.919057 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-26 03:25:55.919068 | orchestrator | Thursday 26 March 2026 03:25:29 +0000 (0:00:00.463) 0:00:01.048 ******** 2026-03-26 03:25:55.919108 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:25:55.919123 | orchestrator | 2026-03-26 03:25:55.919134 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-03-26 03:25:55.919145 | orchestrator | Thursday 26 March 2026 03:25:29 +0000 (0:00:00.589) 0:00:01.637 ******** 2026-03-26 03:25:55.919156 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-03-26 03:25:55.919167 | orchestrator | 2026-03-26 03:25:55.919180 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-03-26 03:25:55.919193 | orchestrator | Thursday 26 March 2026 03:25:33 +0000 (0:00:03.454) 0:00:05.092 ******** 2026-03-26 03:25:55.919207 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-03-26 03:25:55.919220 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-03-26 03:25:55.919233 | orchestrator | 2026-03-26 03:25:55.919246 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-03-26 03:25:55.919258 | orchestrator | Thursday 26 March 2026 03:25:40 +0000 (0:00:06.814) 0:00:11.906 ******** 2026-03-26 03:25:55.919270 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:25:55.919284 | orchestrator | 2026-03-26 03:25:55.919297 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-03-26 03:25:55.919310 | orchestrator | Thursday 26 March 2026 03:25:43 +0000 (0:00:02.984) 0:00:14.891 ******** 2026-03-26 03:25:55.919324 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:25:55.919336 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-03-26 03:25:55.919349 | orchestrator | 2026-03-26 03:25:55.919361 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-03-26 03:25:55.919403 | orchestrator | Thursday 26 March 2026 03:25:47 +0000 (0:00:04.064) 0:00:18.955 ******** 2026-03-26 03:25:55.919416 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-03-26 03:25:55.919429 | orchestrator | 2026-03-26 03:25:55.919441 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-03-26 03:25:55.919454 | orchestrator | Thursday 26 March 2026 03:25:50 +0000 (0:00:03.407) 0:00:22.363 ******** 2026-03-26 03:25:55.919466 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-03-26 03:25:55.919477 | orchestrator | 2026-03-26 03:25:55.919503 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-03-26 03:25:55.919515 | orchestrator | Thursday 26 March 2026 03:25:54 +0000 (0:00:03.844) 0:00:26.207 ******** 2026-03-26 03:25:55.919530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:55.919567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:55.919580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:55.919592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:55.919618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:55.919639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:59.940993 | orchestrator | 2026-03-26 03:25:59.941132 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-03-26 03:25:59.941146 | orchestrator | Thursday 26 March 2026 03:25:55 +0000 (0:00:01.339) 0:00:27.547 ******** 2026-03-26 03:25:59.941154 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:25:59.941161 | orchestrator | 2026-03-26 03:25:59.941168 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-03-26 03:25:59.941174 | orchestrator | Thursday 26 March 2026 03:25:56 +0000 (0:00:00.777) 0:00:28.324 ******** 2026-03-26 03:25:59.941183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:59.941212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:59.941232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:59.941252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:59.941260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:59.941267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:25:59.941279 | orchestrator | 2026-03-26 03:25:59.941285 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-03-26 03:25:59.941292 | orchestrator | Thursday 26 March 2026 03:25:59 +0000 (0:00:02.460) 0:00:30.785 ******** 2026-03-26 03:25:59.941302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 03:25:59.941309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 03:25:59.941316 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:25:59.941334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.292935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293137 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:26:01.293178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293206 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:26:01.293226 | orchestrator | 2026-03-26 03:26:01.293245 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-03-26 03:26:01.293265 | orchestrator | Thursday 26 March 2026 03:25:59 +0000 (0:00:00.790) 0:00:31.576 ******** 2026-03-26 03:26:01.293283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293364 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:26:01.293392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293424 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:26:01.293435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-03-26 03:26:01.293465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-03-26 03:26:09.865022 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:26:09.865190 | orchestrator | 2026-03-26 03:26:09.865208 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-03-26 03:26:09.865219 | orchestrator | Thursday 26 March 2026 03:26:01 +0000 (0:00:01.340) 0:00:32.917 ******** 2026-03-26 03:26:09.865247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:09.865262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:09.865272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:09.865299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:09.865328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:09.865342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:09.865352 | orchestrator | 2026-03-26 03:26:09.865362 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-03-26 03:26:09.865371 | orchestrator | Thursday 26 March 2026 03:26:03 +0000 (0:00:02.498) 0:00:35.415 ******** 2026-03-26 03:26:09.865380 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-26 03:26:09.865389 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-26 03:26:09.865397 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-03-26 03:26:09.865406 | orchestrator | 2026-03-26 03:26:09.865415 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-03-26 03:26:09.865424 | orchestrator | Thursday 26 March 2026 03:26:05 +0000 (0:00:01.639) 0:00:37.055 ******** 2026-03-26 03:26:09.865432 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-26 03:26:09.865441 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-26 03:26:09.865457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-03-26 03:26:09.865479 | orchestrator | 2026-03-26 03:26:09.865498 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-03-26 03:26:09.865507 | orchestrator | Thursday 26 March 2026 03:26:07 +0000 (0:00:02.137) 0:00:39.193 ******** 2026-03-26 03:26:09.865517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:09.865535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027488 | orchestrator | 2026-03-26 03:26:12.027504 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-03-26 03:26:12.027517 | orchestrator | Thursday 26 March 2026 03:26:09 +0000 (0:00:02.307) 0:00:41.501 ******** 2026-03-26 03:26:12.027528 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:26:12.027540 | orchestrator | skipping: 
[testbed-node-1] 2026-03-26 03:26:12.027551 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:26:12.027562 | orchestrator | 2026-03-26 03:26:12.027592 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-03-26 03:26:12.027604 | orchestrator | Thursday 26 March 2026 03:26:10 +0000 (0:00:00.325) 0:00:41.826 ******** 2026-03-26 03:26:12.027638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:12.027718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:46.360554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-03-26 03:26:46.360685 | orchestrator | 2026-03-26 03:26:46.360700 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-03-26 03:26:46.360712 | orchestrator | Thursday 26 March 2026 03:26:12 +0000 (0:00:01.836) 0:00:43.663 ******** 2026-03-26 03:26:46.360721 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:26:46.360732 | orchestrator | 2026-03-26 03:26:46.360746 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-03-26 03:26:46.360760 | orchestrator | Thursday 26 March 2026 03:26:14 +0000 (0:00:02.288) 0:00:45.952 ******** 2026-03-26 03:26:46.360769 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:26:46.360778 | orchestrator | 2026-03-26 03:26:46.360787 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-03-26 03:26:46.360796 | orchestrator | Thursday 26 March 2026 03:26:16 +0000 (0:00:02.406) 0:00:48.358 ******** 2026-03-26 03:26:46.360804 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:26:46.360813 | orchestrator | 2026-03-26 03:26:46.360822 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-26 03:26:46.360831 | orchestrator | Thursday 26 March 2026 03:26:24 +0000 (0:00:08.143) 0:00:56.502 ******** 2026-03-26 03:26:46.360840 | orchestrator | 2026-03-26 03:26:46.360849 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-03-26 03:26:46.360858 | orchestrator | Thursday 26 March 2026 03:26:24 +0000 (0:00:00.074) 0:00:56.576 ******** 2026-03-26 03:26:46.360867 | orchestrator | 2026-03-26 03:26:46.360876 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-03-26 03:26:46.360885 | orchestrator | Thursday 26 March 2026 03:26:25 +0000 (0:00:00.071) 0:00:56.648 ******** 2026-03-26 03:26:46.360894 | orchestrator | 2026-03-26 03:26:46.360902 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-03-26 03:26:46.360911 | orchestrator | Thursday 26 March 2026 03:26:25 +0000 (0:00:00.083) 0:00:56.731 ******** 2026-03-26 03:26:46.360920 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:26:46.360929 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:26:46.360938 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:26:46.360947 | orchestrator | 2026-03-26 03:26:46.360956 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-03-26 03:26:46.360965 | orchestrator | Thursday 26 March 2026 03:26:36 +0000 (0:00:11.142) 0:01:07.874 ******** 2026-03-26 03:26:46.360973 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:26:46.360983 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:26:46.360991 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:26:46.361000 | orchestrator | 2026-03-26 03:26:46.361009 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:26:46.361019 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 03:26:46.361030 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 03:26:46.361065 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 03:26:46.361076 | orchestrator | 2026-03-26 03:26:46.361085 | orchestrator | 2026-03-26 03:26:46.361094 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:26:46.361105 | orchestrator | Thursday 26 
March 2026 03:26:46 +0000 (0:00:09.775) 0:01:17.650 ******** 2026-03-26 03:26:46.361115 | orchestrator | =============================================================================== 2026-03-26 03:26:46.361132 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.14s 2026-03-26 03:26:46.361142 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.78s 2026-03-26 03:26:46.361153 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 8.14s 2026-03-26 03:26:46.361163 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.81s 2026-03-26 03:26:46.361194 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.06s 2026-03-26 03:26:46.361205 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.84s 2026-03-26 03:26:46.361215 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.45s 2026-03-26 03:26:46.361225 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.41s 2026-03-26 03:26:46.361249 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 2.98s 2026-03-26 03:26:46.361260 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.50s 2026-03-26 03:26:46.361270 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.46s 2026-03-26 03:26:46.361280 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.41s 2026-03-26 03:26:46.361290 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.31s 2026-03-26 03:26:46.361300 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.29s 2026-03-26 03:26:46.361310 | orchestrator | skyline : Copying over nginx.conf 
files for services -------------------- 2.14s 2026-03-26 03:26:46.361321 | orchestrator | skyline : Check skyline container --------------------------------------- 1.84s 2026-03-26 03:26:46.361330 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.64s 2026-03-26 03:26:46.361340 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.34s 2026-03-26 03:26:46.361350 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.34s 2026-03-26 03:26:46.361360 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS certificate --- 0.79s 2026-03-26 03:26:48.849682 | orchestrator | 2026-03-26 03:26:48 | INFO  | Task 06ad8f27-7892-4279-9b53-bb6de8a558f5 (glance) was prepared for execution. 2026-03-26 03:26:48.849787 | orchestrator | 2026-03-26 03:26:48 | INFO  | It takes a moment until task 06ad8f27-7892-4279-9b53-bb6de8a558f5 (glance) has been started and output is visible here. 
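For readers skimming the loop items above: each skipped/changed item is one entry of the service map that the kolla-ansible skyline role iterates over with `dict2items`. Reconstructed from the logged items, a single entry looks roughly like the YAML below; this is a sketch for orientation only — the enclosing variable name (`skyline_services`) is an assumption based on kolla-ansible conventions, and the values shown are those echoed for `testbed-node-0`.

```yaml
# Sketch of one entry of the service map the skyline role loops over
# (reconstructed from the loop items in the log above; the variable name
# skyline_services is an assumption, not confirmed by this log).
skyline-apiserver:
  container_name: skyline_apiserver
  group: skyline-apiserver
  enabled: true
  image: registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130
  volumes:
    - "/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro"
    - "/etc/localtime:/etc/localtime:ro"
    - "/etc/timezone:/etc/timezone:ro"
    - "kolla_logs:/var/log/kolla/"
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"]
    timeout: "30"
  haproxy:
    skyline_apiserver:            # internal VIP frontend
      enabled: "yes"
      mode: http
      external: false
      port: "9998"
      listen_port: "9998"
      tls_backend: "no"
    skyline_apiserver_external:   # public frontend behind api.testbed.osism.xyz
      enabled: "yes"
      mode: http
      external: true
      external_fqdn: api.testbed.osism.xyz
      port: "9998"
      listen_port: "9998"
      tls_backend: "no"
```

The `skyline-console` entry in the log is identical in shape, differing only in the container name, image, and port (9999 instead of 9998).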
2026-03-26 03:27:24.503132 | orchestrator | 2026-03-26 03:27:24.503252 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:27:24.503271 | orchestrator | 2026-03-26 03:27:24.503283 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:27:24.503296 | orchestrator | Thursday 26 March 2026 03:26:53 +0000 (0:00:00.298) 0:00:00.298 ******** 2026-03-26 03:27:24.503307 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:27:24.503320 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:27:24.503331 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:27:24.503343 | orchestrator | 2026-03-26 03:27:24.503354 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:27:24.503366 | orchestrator | Thursday 26 March 2026 03:26:53 +0000 (0:00:00.436) 0:00:00.735 ******** 2026-03-26 03:27:24.503377 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-26 03:27:24.503389 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-26 03:27:24.503400 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-26 03:27:24.503412 | orchestrator | 2026-03-26 03:27:24.503423 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-26 03:27:24.503434 | orchestrator | 2026-03-26 03:27:24.503445 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-26 03:27:24.503457 | orchestrator | Thursday 26 March 2026 03:26:54 +0000 (0:00:00.489) 0:00:01.225 ******** 2026-03-26 03:27:24.503495 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:27:24.503508 | orchestrator | 2026-03-26 03:27:24.503519 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-26 
03:27:24.503530 | orchestrator | Thursday 26 March 2026 03:26:54 +0000 (0:00:00.676) 0:00:01.901 ******** 2026-03-26 03:27:24.503541 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-26 03:27:24.503552 | orchestrator | 2026-03-26 03:27:24.503563 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-26 03:27:24.503574 | orchestrator | Thursday 26 March 2026 03:26:58 +0000 (0:00:03.400) 0:00:05.301 ******** 2026-03-26 03:27:24.503585 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-26 03:27:24.503596 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-26 03:27:24.503607 | orchestrator | 2026-03-26 03:27:24.503620 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-26 03:27:24.503632 | orchestrator | Thursday 26 March 2026 03:27:04 +0000 (0:00:06.406) 0:00:11.708 ******** 2026-03-26 03:27:24.503644 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:27:24.503658 | orchestrator | 2026-03-26 03:27:24.503671 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-26 03:27:24.503683 | orchestrator | Thursday 26 March 2026 03:27:07 +0000 (0:00:03.290) 0:00:14.999 ******** 2026-03-26 03:27:24.503696 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:27:24.503711 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-26 03:27:24.503731 | orchestrator | 2026-03-26 03:27:24.503751 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-26 03:27:24.503767 | orchestrator | Thursday 26 March 2026 03:27:12 +0000 (0:00:04.347) 0:00:19.346 ******** 2026-03-26 03:27:24.503787 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 
03:27:24.503802 | orchestrator | 2026-03-26 03:27:24.503821 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-26 03:27:24.503839 | orchestrator | Thursday 26 March 2026 03:27:15 +0000 (0:00:03.609) 0:00:22.956 ******** 2026-03-26 03:27:24.503888 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-26 03:27:24.503906 | orchestrator | 2026-03-26 03:27:24.503924 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-26 03:27:24.503942 | orchestrator | Thursday 26 March 2026 03:27:20 +0000 (0:00:04.187) 0:00:27.144 ******** 2026-03-26 03:27:24.503996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:27:24.504104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:27:24.504137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:27:24.504160 | orchestrator | 2026-03-26 03:27:24.504179 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-03-26 03:27:24.504198 | orchestrator | Thursday 26 March 2026 03:27:23 +0000 (0:00:03.578) 0:00:30.723 ******** 2026-03-26 03:27:24.504216 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:27:24.504255 | orchestrator | 2026-03-26 03:27:24.504288 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-26 03:27:40.768767 | orchestrator | Thursday 26 March 2026 03:27:24 +0000 (0:00:00.796) 0:00:31.519 ******** 2026-03-26 03:27:40.768877 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:27:40.768904 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:27:40.768916 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:27:40.768926 | orchestrator | 2026-03-26 03:27:40.768937 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-26 03:27:40.768947 | orchestrator | Thursday 26 March 2026 03:27:28 +0000 (0:00:03.690) 0:00:35.210 ******** 2026-03-26 03:27:40.768958 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:27:40.768969 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:27:40.768979 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:27:40.768989 | orchestrator | 2026-03-26 03:27:40.769081 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-26 03:27:40.769104 | orchestrator | Thursday 26 March 2026 03:27:29 +0000 (0:00:01.659) 0:00:36.869 ******** 2026-03-26 03:27:40.769118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 
03:27:40.769129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:27:40.769139 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:27:40.769149 | orchestrator | 2026-03-26 03:27:40.769159 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-26 03:27:40.769169 | orchestrator | Thursday 26 March 2026 03:27:31 +0000 (0:00:01.448) 0:00:38.318 ******** 2026-03-26 03:27:40.769179 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:27:40.769190 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:27:40.769200 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:27:40.769209 | orchestrator | 2026-03-26 03:27:40.769219 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-26 03:27:40.769229 | orchestrator | Thursday 26 March 2026 03:27:32 +0000 (0:00:00.736) 0:00:39.055 ******** 2026-03-26 03:27:40.769239 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:27:40.769248 | orchestrator | 2026-03-26 03:27:40.769259 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-26 03:27:40.769270 | orchestrator | Thursday 26 March 2026 03:27:32 +0000 (0:00:00.139) 0:00:39.195 ******** 2026-03-26 03:27:40.769281 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:27:40.769293 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:27:40.769305 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:27:40.769316 | orchestrator | 2026-03-26 03:27:40.769327 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-26 03:27:40.769339 | orchestrator | Thursday 26 March 2026 03:27:32 +0000 (0:00:00.336) 0:00:39.531 ******** 2026-03-26 03:27:40.769350 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:27:40.769360 | orchestrator | 2026-03-26 03:27:40.769370 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-26 03:27:40.769380 | orchestrator | Thursday 26 March 2026 03:27:33 +0000 (0:00:00.910) 0:00:40.441 ******** 2026-03-26 03:27:40.769411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:27:40.769470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:27:40.769489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:27:40.769508 | orchestrator | 2026-03-26 03:27:40.769518 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-26 03:27:40.769528 | orchestrator | Thursday 26 March 2026 03:27:37 +0000 (0:00:04.121) 0:00:44.563 ******** 2026-03-26 03:27:40.769548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 03:27:44.378638 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:27:44.378759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 03:27:44.378802 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:27:44.378817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 03:27:44.378829 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:27:44.378841 | orchestrator | 2026-03-26 03:27:44.378853 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-26 03:27:44.378865 | orchestrator | Thursday 26 March 2026 03:27:40 +0000 (0:00:03.221) 0:00:47.784 ******** 2026-03-26 03:27:44.378897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 03:27:44.378917 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:27:44.378935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 03:27:44.378947 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:27:44.378968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 03:28:20.490091 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490198 | orchestrator | 2026-03-26 03:28:20.490213 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-26 03:28:20.490225 | orchestrator | Thursday 26 March 2026 03:27:44 +0000 (0:00:03.615) 0:00:51.400 ******** 2026-03-26 03:28:20.490235 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:28:20.490265 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:28:20.490274 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490283 | orchestrator | 2026-03-26 03:28:20.490292 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-26 03:28:20.490301 | orchestrator | Thursday 26 March 2026 03:27:47 +0000 (0:00:03.243) 0:00:54.643 ******** 2026-03-26 03:28:20.490338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:28:20.490353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:28:20.490388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:28:20.490408 | orchestrator | 2026-03-26 03:28:20.490418 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-26 03:28:20.490427 | orchestrator | Thursday 26 March 2026 03:27:51 +0000 (0:00:04.066) 0:00:58.710 ******** 2026-03-26 03:28:20.490436 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:28:20.490444 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:28:20.490453 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:28:20.490462 | orchestrator | 2026-03-26 03:28:20.490471 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-26 03:28:20.490480 | orchestrator | Thursday 26 March 2026 03:27:57 +0000 (0:00:05.749) 0:01:04.460 ******** 2026-03-26 03:28:20.490488 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:28:20.490497 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:28:20.490506 | 
orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490515 | orchestrator | 2026-03-26 03:28:20.490523 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-26 03:28:20.490532 | orchestrator | Thursday 26 March 2026 03:28:01 +0000 (0:00:03.616) 0:01:08.077 ******** 2026-03-26 03:28:20.490543 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490553 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:28:20.490563 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:28:20.490573 | orchestrator | 2026-03-26 03:28:20.490583 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-26 03:28:20.490592 | orchestrator | Thursday 26 March 2026 03:28:04 +0000 (0:00:03.784) 0:01:11.861 ******** 2026-03-26 03:28:20.490602 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:28:20.490613 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:28:20.490623 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490633 | orchestrator | 2026-03-26 03:28:20.490643 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-26 03:28:20.490652 | orchestrator | Thursday 26 March 2026 03:28:08 +0000 (0:00:03.647) 0:01:15.509 ******** 2026-03-26 03:28:20.490662 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:28:20.490672 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:28:20.490683 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490693 | orchestrator | 2026-03-26 03:28:20.490702 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-26 03:28:20.490712 | orchestrator | Thursday 26 March 2026 03:28:12 +0000 (0:00:03.667) 0:01:19.177 ******** 2026-03-26 03:28:20.490722 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:28:20.490739 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:28:20.490749 | 
orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490759 | orchestrator | 2026-03-26 03:28:20.490769 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-26 03:28:20.490779 | orchestrator | Thursday 26 March 2026 03:28:12 +0000 (0:00:00.582) 0:01:19.759 ******** 2026-03-26 03:28:20.490790 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-26 03:28:20.490801 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:28:20.490812 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-26 03:28:20.490822 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:28:20.490832 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-26 03:28:20.490842 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:28:20.490852 | orchestrator | 2026-03-26 03:28:20.490862 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-26 03:28:20.490872 | orchestrator | Thursday 26 March 2026 03:28:16 +0000 (0:00:03.340) 0:01:23.100 ******** 2026-03-26 03:28:20.490881 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:28:20.490891 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:28:20.490902 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:28:20.490912 | orchestrator | 2026-03-26 03:28:20.490922 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-26 03:28:20.490937 | orchestrator | Thursday 26 March 2026 03:28:20 +0000 (0:00:04.405) 0:01:27.506 ******** 2026-03-26 03:29:37.196598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:29:37.196718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:29:37.196798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 03:29:37.196815 | orchestrator | 2026-03-26 03:29:37.196828 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-26 03:29:37.196841 | orchestrator | Thursday 26 March 2026 03:28:24 +0000 (0:00:03.983) 0:01:31.490 ******** 2026-03-26 03:29:37.196852 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:29:37.196865 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:29:37.196875 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:29:37.196886 | orchestrator | 2026-03-26 03:29:37.196898 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-26 03:29:37.196909 | orchestrator | Thursday 26 March 2026 03:28:25 +0000 (0:00:00.565) 0:01:32.056 ******** 2026-03-26 03:29:37.196920 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:29:37.196931 | orchestrator | 2026-03-26 03:29:37.196998 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-03-26 03:29:37.197010 | orchestrator | Thursday 26 March 2026 03:28:27 +0000 (0:00:02.210) 0:01:34.267 ******** 2026-03-26 03:29:37.197021 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:29:37.197033 | orchestrator | 2026-03-26 03:29:37.197044 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-26 03:29:37.197055 | orchestrator | Thursday 26 March 2026 03:28:29 +0000 (0:00:02.446) 0:01:36.714 ******** 2026-03-26 03:29:37.197075 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:29:37.197086 | orchestrator | 2026-03-26 03:29:37.197097 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-26 03:29:37.197109 | orchestrator | Thursday 26 March 2026 03:28:32 +0000 (0:00:02.423) 0:01:39.137 ******** 2026-03-26 03:29:37.197122 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:29:37.197136 | orchestrator | 2026-03-26 03:29:37.197148 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-26 03:29:37.197160 | orchestrator | Thursday 26 March 2026 03:29:00 +0000 (0:00:28.855) 0:02:07.993 ******** 2026-03-26 03:29:37.197172 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:29:37.197184 | orchestrator | 2026-03-26 03:29:37.197197 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-26 03:29:37.197210 | orchestrator | Thursday 26 March 2026 03:29:03 +0000 (0:00:02.219) 0:02:10.213 ******** 2026-03-26 03:29:37.197222 | orchestrator | 2026-03-26 03:29:37.197235 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-26 03:29:37.197247 | orchestrator | Thursday 26 March 2026 03:29:03 +0000 (0:00:00.070) 0:02:10.283 ******** 2026-03-26 03:29:37.197260 | orchestrator | 2026-03-26 03:29:37.197272 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-03-26 03:29:37.197283 | orchestrator | Thursday 26 March 2026 03:29:03 +0000 (0:00:00.071) 0:02:10.355 ******** 2026-03-26 03:29:37.197294 | orchestrator | 2026-03-26 03:29:37.197304 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-26 03:29:37.197315 | orchestrator | Thursday 26 March 2026 03:29:03 +0000 (0:00:00.076) 0:02:10.432 ******** 2026-03-26 03:29:37.197326 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:29:37.197337 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:29:37.197348 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:29:37.197360 | orchestrator | 2026-03-26 03:29:37.197370 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:29:37.197383 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-26 03:29:37.197395 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-26 03:29:37.197406 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-26 03:29:37.197417 | orchestrator | 2026-03-26 03:29:37.197428 | orchestrator | 2026-03-26 03:29:37.197439 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:29:37.197450 | orchestrator | Thursday 26 March 2026 03:29:37 +0000 (0:00:33.781) 0:02:44.214 ******** 2026-03-26 03:29:37.197461 | orchestrator | =============================================================================== 2026-03-26 03:29:37.197472 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.78s 2026-03-26 03:29:37.197483 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.86s 2026-03-26 03:29:37.197494 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.41s 2026-03-26 03:29:37.197513 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.75s 2026-03-26 03:29:37.575596 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.41s 2026-03-26 03:29:37.575694 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.35s 2026-03-26 03:29:37.575708 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.19s 2026-03-26 03:29:37.575719 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.12s 2026-03-26 03:29:37.575729 | orchestrator | glance : Copying over config.json files for services -------------------- 4.07s 2026-03-26 03:29:37.575739 | orchestrator | glance : Check glance containers ---------------------------------------- 3.98s 2026-03-26 03:29:37.575785 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.78s 2026-03-26 03:29:37.575797 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.69s 2026-03-26 03:29:37.575806 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.67s 2026-03-26 03:29:37.575816 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.65s 2026-03-26 03:29:37.575826 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.62s 2026-03-26 03:29:37.575836 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.62s 2026-03-26 03:29:37.575846 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.61s 2026-03-26 03:29:37.575856 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.58s 2026-03-26 03:29:37.575866 | orchestrator | 
service-ks-register : glance | Creating services ------------------------ 3.40s 2026-03-26 03:29:37.575875 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.34s 2026-03-26 03:29:39.978915 | orchestrator | 2026-03-26 03:29:39 | INFO  | Task 08eb036f-e85f-4744-96a6-01108fe8d3ed (cinder) was prepared for execution. 2026-03-26 03:29:39.979052 | orchestrator | 2026-03-26 03:29:39 | INFO  | It takes a moment until task 08eb036f-e85f-4744-96a6-01108fe8d3ed (cinder) has been started and output is visible here. 2026-03-26 03:30:15.304976 | orchestrator | 2026-03-26 03:30:15.305070 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:30:15.305081 | orchestrator | 2026-03-26 03:30:15.305088 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:30:15.305094 | orchestrator | Thursday 26 March 2026 03:29:44 +0000 (0:00:00.269) 0:00:00.269 ******** 2026-03-26 03:30:15.305100 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:30:15.305107 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:30:15.305113 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:30:15.305119 | orchestrator | 2026-03-26 03:30:15.305125 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:30:15.305131 | orchestrator | Thursday 26 March 2026 03:29:44 +0000 (0:00:00.342) 0:00:00.612 ******** 2026-03-26 03:30:15.305137 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-26 03:30:15.305143 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-26 03:30:15.305153 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-26 03:30:15.305163 | orchestrator | 2026-03-26 03:30:15.305172 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-26 03:30:15.305182 | orchestrator | 2026-03-26 
03:30:15.305192 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-26 03:30:15.305202 | orchestrator | Thursday 26 March 2026 03:29:45 +0000 (0:00:00.465) 0:00:01.077 ******** 2026-03-26 03:30:15.305211 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:30:15.305222 | orchestrator | 2026-03-26 03:30:15.305231 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-26 03:30:15.305241 | orchestrator | Thursday 26 March 2026 03:29:45 +0000 (0:00:00.589) 0:00:01.667 ******** 2026-03-26 03:30:15.305252 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-26 03:30:15.305262 | orchestrator | 2026-03-26 03:30:15.305272 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-26 03:30:15.305284 | orchestrator | Thursday 26 March 2026 03:29:49 +0000 (0:00:03.433) 0:00:05.100 ******** 2026-03-26 03:30:15.305295 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-26 03:30:15.305308 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-26 03:30:15.305315 | orchestrator | 2026-03-26 03:30:15.305321 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-26 03:30:15.305348 | orchestrator | Thursday 26 March 2026 03:29:55 +0000 (0:00:06.462) 0:00:11.563 ******** 2026-03-26 03:30:15.305355 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:30:15.305361 | orchestrator | 2026-03-26 03:30:15.305367 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-26 03:30:15.305373 | orchestrator | Thursday 26 March 2026 03:29:58 +0000 (0:00:03.230) 
0:00:14.793 ******** 2026-03-26 03:30:15.305381 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:30:15.305391 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-26 03:30:15.305401 | orchestrator | 2026-03-26 03:30:15.305411 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-26 03:30:15.305420 | orchestrator | Thursday 26 March 2026 03:30:02 +0000 (0:00:04.060) 0:00:18.853 ******** 2026-03-26 03:30:15.305430 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 03:30:15.305439 | orchestrator | 2026-03-26 03:30:15.305447 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-26 03:30:15.305456 | orchestrator | Thursday 26 March 2026 03:30:06 +0000 (0:00:03.240) 0:00:22.094 ******** 2026-03-26 03:30:15.305465 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-26 03:30:15.305474 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-26 03:30:15.305483 | orchestrator | 2026-03-26 03:30:15.305493 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-26 03:30:15.305503 | orchestrator | Thursday 26 March 2026 03:30:13 +0000 (0:00:07.226) 0:00:29.321 ******** 2026-03-26 03:30:15.305531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:15.305564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:15.305578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:15.305599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:15.305611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:15.305627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:15.305638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:15.305657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:21.287483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:21.287654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:21.287681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:21.287722 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:21.287744 | orchestrator | 2026-03-26 03:30:21.287767 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-26 03:30:21.287789 | orchestrator | Thursday 26 March 2026 03:30:15 +0000 (0:00:02.050) 0:00:31.371 ******** 2026-03-26 03:30:21.287810 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:30:21.287831 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:30:21.287851 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:30:21.287872 | orchestrator | 2026-03-26 03:30:21.287892 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-26 03:30:21.287942 | orchestrator | Thursday 26 March 2026 03:30:15 +0000 (0:00:00.520) 0:00:31.892 ******** 2026-03-26 03:30:21.287963 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:30:21.287982 | orchestrator | 2026-03-26 03:30:21.288003 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-26 03:30:21.288025 | orchestrator | Thursday 26 March 2026 03:30:16 +0000 (0:00:00.544) 0:00:32.437 ******** 2026-03-26 03:30:21.288048 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-26 03:30:21.288068 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-26 03:30:21.288090 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-26 03:30:21.288109 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-26 03:30:21.288144 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-26 03:30:21.288181 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-26 03:30:21.288201 | orchestrator | 2026-03-26 03:30:21.288221 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-26 03:30:21.288241 | orchestrator | Thursday 26 March 2026 03:30:18 +0000 (0:00:01.624) 0:00:34.061 ******** 2026-03-26 03:30:21.288288 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-26 03:30:21.288312 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-26 03:30:21.288340 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-26 03:30:21.288360 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-26 03:30:21.288439 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-26 03:30:32.329753 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-26 03:30:32.329872 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-26 03:30:32.329949 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-26 03:30:32.329964 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-26 03:30:32.329977 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-26 03:30:32.330089 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-26 
03:30:32.330107 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-26 03:30:32.330119 | orchestrator | 2026-03-26 03:30:32.330132 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-26 03:30:32.330145 | orchestrator | Thursday 26 March 2026 03:30:21 +0000 (0:00:03.500) 0:00:37.561 ******** 2026-03-26 03:30:32.330157 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:30:32.330169 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:30:32.330180 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-26 03:30:32.330191 | orchestrator | 2026-03-26 03:30:32.330203 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-26 03:30:32.330214 | orchestrator | Thursday 26 March 2026 03:30:23 +0000 (0:00:01.630) 0:00:39.192 ******** 2026-03-26 03:30:32.330226 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-26 03:30:32.330237 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-26 03:30:32.330248 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-26 03:30:32.330259 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-26 03:30:32.330278 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-26 03:30:32.330291 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-26 03:30:32.330304 | orchestrator | 2026-03-26 03:30:32.330316 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-26 03:30:32.330328 | orchestrator | Thursday 26 March 2026 03:30:25 +0000 (0:00:02.668) 0:00:41.860 ******** 2026-03-26 03:30:32.330342 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-26 03:30:32.330355 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-26 03:30:32.330387 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-26 03:30:32.330406 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-26 03:30:32.330423 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-26 03:30:32.330441 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-26 03:30:32.330459 | orchestrator | 2026-03-26 03:30:32.330479 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-26 03:30:32.330498 | orchestrator | Thursday 26 March 2026 03:30:26 +0000 (0:00:01.061) 0:00:42.921 ******** 2026-03-26 03:30:32.330518 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:30:32.330537 | orchestrator | 2026-03-26 03:30:32.330556 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-26 03:30:32.330570 | orchestrator | Thursday 26 March 2026 03:30:27 +0000 (0:00:00.167) 0:00:43.089 ******** 2026-03-26 03:30:32.330583 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:30:32.330595 | orchestrator | 
skipping: [testbed-node-1] 2026-03-26 03:30:32.330607 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:30:32.330618 | orchestrator | 2026-03-26 03:30:32.330629 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-26 03:30:32.330641 | orchestrator | Thursday 26 March 2026 03:30:27 +0000 (0:00:00.510) 0:00:43.599 ******** 2026-03-26 03:30:32.330652 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:30:32.330664 | orchestrator | 2026-03-26 03:30:32.330675 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-26 03:30:32.330686 | orchestrator | Thursday 26 March 2026 03:30:28 +0000 (0:00:00.604) 0:00:44.204 ******** 2026-03-26 03:30:32.330708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:33.305174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:33.305321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:33.305362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 
03:30:33.305475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:33.305536 | orchestrator | 2026-03-26 03:30:33.305556 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-26 03:30:33.305575 | orchestrator | Thursday 26 March 2026 03:30:32 +0000 (0:00:04.200) 0:00:48.405 ******** 2026-03-26 03:30:33.305606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:33.419814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.419973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.419987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.419994 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:30:33.420003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:33.420011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.420032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.420048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.420055 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:30:33.420062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:33.420069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.420075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.420082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:33.420093 | orchestrator | skipping: 
[testbed-node-2] 2026-03-26 03:30:33.420100 | orchestrator | 2026-03-26 03:30:33.420107 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-26 03:30:33.420119 | orchestrator | Thursday 26 March 2026 03:30:33 +0000 (0:00:00.986) 0:00:49.391 ******** 2026-03-26 03:30:34.026790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:34.026896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:34.026992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:34.027009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:34.027022 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:30:34.027036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:34.027100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:34.027123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:34.027136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:34.027148 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:30:34.027160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:34.027172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:34.027202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:38.816005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:38.816090 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:30:38.816101 | orchestrator | 2026-03-26 03:30:38.816121 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-03-26 03:30:38.816129 | orchestrator | Thursday 26 March 2026 03:30:34 +0000 (0:00:00.966) 0:00:50.357 ******** 2026-03-26 03:30:38.816137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:38.816146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 
03:30:38.816153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:38.816188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:38.816198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:38.816209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:38.816216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:38.816223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:38.816230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:38.816245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025406 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025575 | orchestrator | 2026-03-26 03:30:52.025598 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-26 03:30:52.025621 | orchestrator | Thursday 26 March 2026 03:30:38 +0000 (0:00:04.517) 0:00:54.875 ******** 2026-03-26 03:30:52.025641 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-26 03:30:52.025662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-26 03:30:52.025681 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-26 03:30:52.025700 | orchestrator | 2026-03-26 03:30:52.025720 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-26 03:30:52.025741 | orchestrator | Thursday 26 March 2026 03:30:40 +0000 (0:00:01.957) 0:00:56.833 ******** 2026-03-26 03:30:52.025762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:52.025814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:52.025859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:52.025874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:52.025982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:54.544723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:54.544829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:54.544844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:54.544881 | orchestrator | 2026-03-26 03:30:54.544895 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-03-26 03:30:54.545000 | orchestrator | Thursday 26 March 2026 03:30:52 +0000 (0:00:11.264) 0:01:08.098 ******** 2026-03-26 03:30:54.545013 | orchestrator | changed: [testbed-node-0] 
2026-03-26 03:30:54.545026 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:30:54.545037 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:30:54.545048 | orchestrator | 2026-03-26 03:30:54.545060 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-26 03:30:54.545071 | orchestrator | Thursday 26 March 2026 03:30:53 +0000 (0:00:01.566) 0:01:09.664 ******** 2026-03-26 03:30:54.545084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:54.545098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-03-26 03:30:54.545136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:54.545150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:54.545172 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:30:54.545184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:54.545196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:54.545208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:54.545234 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:58.143234 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:30:58.143342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-26 03:30:58.143384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:30:58.143393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 03:30:58.143401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 03:30:58.143408 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:30:58.143415 | orchestrator | 2026-03-26 
03:30:58.143423 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-26 03:30:58.143430 | orchestrator | Thursday 26 March 2026 03:30:54 +0000 (0:00:00.950) 0:01:10.615 ******** 2026-03-26 03:30:58.143437 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:30:58.143443 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:30:58.143449 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:30:58.143455 | orchestrator | 2026-03-26 03:30:58.143462 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-26 03:30:58.143468 | orchestrator | Thursday 26 March 2026 03:30:55 +0000 (0:00:00.618) 0:01:11.233 ******** 2026-03-26 03:30:58.143501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:58.143516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:58.143523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-26 03:30:58.143530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:58.143537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:58.143547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:30:58.143560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:32:37.309056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:32:37.309189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-26 03:32:37.309212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:32:37.309223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-26 03:32:37.309247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-03-26 03:32:37.309278 | orchestrator | 2026-03-26 03:32:37.309289 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-26 03:32:37.309299 | orchestrator | Thursday 26 March 2026 03:30:58 +0000 (0:00:02.975) 0:01:14.208 ******** 2026-03-26 03:32:37.309307 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:32:37.309317 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:32:37.309324 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:32:37.309332 | orchestrator | 2026-03-26 03:32:37.309341 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-26 03:32:37.309349 | orchestrator | Thursday 26 March 2026 03:30:58 +0000 (0:00:00.307) 0:01:14.516 ******** 2026-03-26 03:32:37.309357 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:32:37.309365 | orchestrator | 2026-03-26 03:32:37.309389 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-26 03:32:37.309398 | orchestrator | Thursday 26 March 2026 03:31:00 +0000 (0:00:02.141) 0:01:16.657 ******** 2026-03-26 03:32:37.309406 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:32:37.309414 | orchestrator | 2026-03-26 03:32:37.309422 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-26 03:32:37.309430 | orchestrator | Thursday 26 March 2026 03:31:02 +0000 (0:00:02.182) 0:01:18.840 ******** 2026-03-26 03:32:37.309438 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:32:37.309446 | orchestrator | 2026-03-26 03:32:37.309454 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-26 03:32:37.309462 | orchestrator | Thursday 26 March 2026 03:31:23 +0000 (0:00:20.199) 0:01:39.039 ******** 2026-03-26 03:32:37.309469 | orchestrator | 2026-03-26 03:32:37.309477 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-03-26 03:32:37.309485 | orchestrator | Thursday 26 March 2026 03:31:23 +0000 (0:00:00.070) 0:01:39.110 ******** 2026-03-26 03:32:37.309493 | orchestrator | 2026-03-26 03:32:37.309501 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-26 03:32:37.309509 | orchestrator | Thursday 26 March 2026 03:31:23 +0000 (0:00:00.076) 0:01:39.187 ******** 2026-03-26 03:32:37.309516 | orchestrator | 2026-03-26 03:32:37.309524 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-26 03:32:37.309532 | orchestrator | Thursday 26 March 2026 03:31:23 +0000 (0:00:00.074) 0:01:39.261 ******** 2026-03-26 03:32:37.309540 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:32:37.309548 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:32:37.309556 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:32:37.309565 | orchestrator | 2026-03-26 03:32:37.309574 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-26 03:32:37.309584 | orchestrator | Thursday 26 March 2026 03:31:49 +0000 (0:00:26.597) 0:02:05.859 ******** 2026-03-26 03:32:37.309592 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:32:37.309601 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:32:37.309610 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:32:37.309619 | orchestrator | 2026-03-26 03:32:37.309629 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-26 03:32:37.309638 | orchestrator | Thursday 26 March 2026 03:31:58 +0000 (0:00:08.377) 0:02:14.237 ******** 2026-03-26 03:32:37.309647 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:32:37.309656 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:32:37.309665 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:32:37.309674 | orchestrator | 2026-03-26 
03:32:37.309683 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-26 03:32:37.309692 | orchestrator | Thursday 26 March 2026 03:32:25 +0000 (0:00:27.727) 0:02:41.964 ******** 2026-03-26 03:32:37.309701 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:32:37.309710 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:32:37.309719 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:32:37.309738 | orchestrator | 2026-03-26 03:32:37.309748 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-26 03:32:37.309758 | orchestrator | Thursday 26 March 2026 03:32:37 +0000 (0:00:11.022) 0:02:52.986 ******** 2026-03-26 03:32:37.309767 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:32:37.309776 | orchestrator | 2026-03-26 03:32:37.309785 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:32:37.309795 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-26 03:32:37.309806 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 03:32:37.309815 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 03:32:37.309824 | orchestrator | 2026-03-26 03:32:37.309834 | orchestrator | 2026-03-26 03:32:37.309845 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:32:37.309881 | orchestrator | Thursday 26 March 2026 03:32:37 +0000 (0:00:00.284) 0:02:53.271 ******** 2026-03-26 03:32:37.309897 | orchestrator | =============================================================================== 2026-03-26 03:32:37.309911 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 27.73s 2026-03-26 03:32:37.309925 | orchestrator | cinder 
: Restart cinder-api container ---------------------------------- 26.60s 2026-03-26 03:32:37.309938 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 20.20s 2026-03-26 03:32:37.309950 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.26s 2026-03-26 03:32:37.309971 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.02s 2026-03-26 03:32:37.309986 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.38s 2026-03-26 03:32:37.310000 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.23s 2026-03-26 03:32:37.310014 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.46s 2026-03-26 03:32:37.310103 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.52s 2026-03-26 03:32:37.310118 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.20s 2026-03-26 03:32:37.310131 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.06s 2026-03-26 03:32:37.310144 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.50s 2026-03-26 03:32:37.310157 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.43s 2026-03-26 03:32:37.310165 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.24s 2026-03-26 03:32:37.310183 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.23s 2026-03-26 03:32:37.739552 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.98s 2026-03-26 03:32:37.739653 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.67s 2026-03-26 03:32:37.739668 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.18s 2026-03-26 03:32:37.739680 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.14s 2026-03-26 03:32:37.739691 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.05s 2026-03-26 03:32:40.211083 | orchestrator | 2026-03-26 03:32:40 | INFO  | Task 68ea24c0-1a9f-4835-83c3-be9f58b464fd (barbican) was prepared for execution. 2026-03-26 03:32:40.211204 | orchestrator | 2026-03-26 03:32:40 | INFO  | It takes a moment until task 68ea24c0-1a9f-4835-83c3-be9f58b464fd (barbican) has been started and output is visible here. 2026-03-26 03:33:24.950990 | orchestrator | 2026-03-26 03:33:24.951135 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:33:24.951196 | orchestrator | 2026-03-26 03:33:24.951218 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:33:24.951237 | orchestrator | Thursday 26 March 2026 03:32:44 +0000 (0:00:00.286) 0:00:00.286 ******** 2026-03-26 03:33:24.951256 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:33:24.951276 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:33:24.951294 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:33:24.951306 | orchestrator | 2026-03-26 03:33:24.951318 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:33:24.951329 | orchestrator | Thursday 26 March 2026 03:32:44 +0000 (0:00:00.332) 0:00:00.618 ******** 2026-03-26 03:33:24.951340 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-26 03:33:24.951352 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-26 03:33:24.951363 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-26 03:33:24.951374 | orchestrator | 2026-03-26 03:33:24.951385 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-03-26 03:33:24.951396 | orchestrator | 2026-03-26 03:33:24.951407 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-26 03:33:24.951418 | orchestrator | Thursday 26 March 2026 03:32:45 +0000 (0:00:00.496) 0:00:01.114 ******** 2026-03-26 03:33:24.951430 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:33:24.951442 | orchestrator | 2026-03-26 03:33:24.951453 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-26 03:33:24.951464 | orchestrator | Thursday 26 March 2026 03:32:45 +0000 (0:00:00.582) 0:00:01.697 ******** 2026-03-26 03:33:24.951478 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-26 03:33:24.951496 | orchestrator | 2026-03-26 03:33:24.951524 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-26 03:33:24.951546 | orchestrator | Thursday 26 March 2026 03:32:49 +0000 (0:00:03.493) 0:00:05.190 ******** 2026-03-26 03:33:24.951564 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-26 03:33:24.951582 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-26 03:33:24.951600 | orchestrator | 2026-03-26 03:33:24.951617 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-26 03:33:24.951635 | orchestrator | Thursday 26 March 2026 03:32:55 +0000 (0:00:06.389) 0:00:11.580 ******** 2026-03-26 03:33:24.951654 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:33:24.951672 | orchestrator | 2026-03-26 03:33:24.951691 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-26 
03:33:24.951709 | orchestrator | Thursday 26 March 2026 03:32:59 +0000 (0:00:03.211) 0:00:14.791 ******** 2026-03-26 03:33:24.951727 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:33:24.951745 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-26 03:33:24.951764 | orchestrator | 2026-03-26 03:33:24.951781 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-26 03:33:24.951800 | orchestrator | Thursday 26 March 2026 03:33:03 +0000 (0:00:04.371) 0:00:19.162 ******** 2026-03-26 03:33:24.951818 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 03:33:24.951836 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-26 03:33:24.951888 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-26 03:33:24.951927 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-26 03:33:24.951946 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-26 03:33:24.951963 | orchestrator | 2026-03-26 03:33:24.951981 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-26 03:33:24.952000 | orchestrator | Thursday 26 March 2026 03:33:19 +0000 (0:00:16.079) 0:00:35.242 ******** 2026-03-26 03:33:24.952037 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-26 03:33:24.952057 | orchestrator | 2026-03-26 03:33:24.952075 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-26 03:33:24.952092 | orchestrator | Thursday 26 March 2026 03:33:23 +0000 (0:00:03.806) 0:00:39.049 ******** 2026-03-26 03:33:24.952117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:24.952172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:24.952188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:24.952201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:24.952223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:24.952245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:24.952266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:30.934436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:30.934545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:30.934561 | orchestrator | 2026-03-26 03:33:30.934575 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-03-26 03:33:30.934588 | orchestrator | Thursday 26 March 2026 03:33:24 +0000 (0:00:01.624) 0:00:40.673 ******** 2026-03-26 03:33:30.934600 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-03-26 03:33:30.934611 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-03-26 03:33:30.934622 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-03-26 03:33:30.934633 | orchestrator | 2026-03-26 03:33:30.934644 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-03-26 03:33:30.934655 | orchestrator | Thursday 26 March 2026 03:33:26 +0000 (0:00:01.236) 0:00:41.910 ******** 2026-03-26 03:33:30.934688 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:33:30.934712 | orchestrator | 2026-03-26 03:33:30.934723 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-03-26 03:33:30.934760 | orchestrator | Thursday 26 March 2026 03:33:26 +0000 (0:00:00.368) 0:00:42.278 ******** 2026-03-26 03:33:30.934772 | orchestrator | 
skipping: [testbed-node-0] 2026-03-26 03:33:30.934783 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:33:30.934794 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:33:30.934805 | orchestrator | 2026-03-26 03:33:30.934816 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-26 03:33:30.934827 | orchestrator | Thursday 26 March 2026 03:33:26 +0000 (0:00:00.326) 0:00:42.604 ******** 2026-03-26 03:33:30.934878 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:33:30.934891 | orchestrator | 2026-03-26 03:33:30.934902 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-03-26 03:33:30.934913 | orchestrator | Thursday 26 March 2026 03:33:27 +0000 (0:00:00.581) 0:00:43.185 ******** 2026-03-26 03:33:30.934926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:30.934956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:30.934969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:30.934983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:30.935014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:30.935028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:30.935041 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:30.935063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:32.333595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:32.333721 | orchestrator | 2026-03-26 03:33:32.333750 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-03-26 03:33:32.333771 | orchestrator | Thursday 26 March 2026 03:33:30 +0000 (0:00:03.472) 0:00:46.658 ******** 2026-03-26 03:33:32.333828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:32.333971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:32.333999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:32.334103 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:33:32.334134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:32.334187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:32.334210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:32.334252 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:33:32.334287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:32.334310 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:32.334332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:32.334352 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:33:32.334371 | orchestrator | 2026-03-26 03:33:32.334388 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-03-26 03:33:32.334402 | orchestrator | Thursday 26 March 2026 03:33:31 +0000 (0:00:00.584) 0:00:47.242 ******** 2026-03-26 03:33:32.334428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:35.955068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:35.955193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 
03:33:35.955218 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:33:35.955259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:35.955272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:35.955288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:35.955303 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:33:35.955377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:35.955424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:35.955449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:35.955465 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:33:35.955481 | orchestrator | 2026-03-26 03:33:35.955496 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-03-26 03:33:35.955513 | orchestrator | Thursday 26 March 2026 03:33:32 +0000 (0:00:00.828) 0:00:48.070 ******** 2026-03-26 03:33:35.955528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:35.955545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:35.955586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:46.000306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.000460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.000489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.000509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.000530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.000576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:33:46.000597 | orchestrator |
2026-03-26 03:33:46.000615 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-26 03:33:46.000635 | orchestrator | Thursday 26 March 2026 03:33:35 +0000 (0:00:03.616) 0:00:51.687 ********
2026-03-26 03:33:46.000652 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:33:46.000670 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:33:46.000687 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:33:46.000704 | orchestrator |
2026-03-26 03:33:46.000745 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-26 03:33:46.000765 | orchestrator | Thursday 26 March 2026 03:33:37 +0000 (0:00:01.625) 0:00:53.313 ********
2026-03-26 03:33:46.000782 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 03:33:46.000799 | orchestrator |
2026-03-26 03:33:46.000815 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-26 03:33:46.000833 | orchestrator | Thursday 26 March 2026 03:33:38 +0000 (0:00:01.031) 0:00:54.345 ********
2026-03-26 03:33:46.000880 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:33:46.000897 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:33:46.000914 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:33:46.000931 | orchestrator |
2026-03-26 03:33:46.000948 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-26 03:33:46.000966 | orchestrator | Thursday 26 March 2026 03:33:39 +0000 (0:00:00.592) 0:00:54.938 ********
2026-03-26 03:33:46.001111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:46.001149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:46.001184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:46.001220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.854055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.854156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.854164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.854183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.854187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:46.854229 | orchestrator | 2026-03-26 03:33:46.854235 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-26 03:33:46.854241 | orchestrator | Thursday 26 March 2026 03:33:45 +0000 (0:00:06.794) 0:01:01.732 ******** 2026-03-26 03:33:46.854256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:46.854264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:46.854269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:46.854273 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:33:46.854278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:46.854292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:46.854296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:46.854300 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:33:46.854308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-26 03:33:49.260490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:33:49.260617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:33:49.260671 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:33:49.260693 | orchestrator | 2026-03-26 03:33:49.260711 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-26 03:33:49.260730 | orchestrator | Thursday 26 March 2026 03:33:46 +0000 (0:00:00.856) 0:01:02.588 ******** 2026-03-26 03:33:49.260748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:49.260768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:49.260804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-26 03:33:49.260824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:49.260874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:49.260887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:49.260897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:49.260907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:49.260918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:33:49.260933 | orchestrator | 2026-03-26 03:33:49.260950 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-26 03:33:49.260975 | orchestrator | Thursday 26 March 2026 03:33:49 +0000 (0:00:02.397) 0:01:04.986 ******** 2026-03-26 03:34:33.980888 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:34:33.980998 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
03:34:33.981013 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:34:33.981025 | orchestrator |
2026-03-26 03:34:33.981053 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-26 03:34:33.981085 | orchestrator | Thursday 26 March 2026 03:33:49 +0000 (0:00:00.328) 0:01:05.314 ********
2026-03-26 03:34:33.981097 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:34:33.981107 | orchestrator |
2026-03-26 03:34:33.981117 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-26 03:34:33.981127 | orchestrator | Thursday 26 March 2026 03:33:51 +0000 (0:00:02.031) 0:01:07.345 ********
2026-03-26 03:34:33.981137 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:34:33.981147 | orchestrator |
2026-03-26 03:34:33.981157 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-26 03:34:33.981167 | orchestrator | Thursday 26 March 2026 03:33:53 +0000 (0:00:02.220) 0:01:09.566 ********
2026-03-26 03:34:33.981177 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:34:33.981187 | orchestrator |
2026-03-26 03:34:33.981197 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-26 03:34:33.981207 | orchestrator | Thursday 26 March 2026 03:34:06 +0000 (0:00:12.503) 0:01:22.069 ********
2026-03-26 03:34:33.981216 | orchestrator |
2026-03-26 03:34:33.981226 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-26 03:34:33.981236 | orchestrator | Thursday 26 March 2026 03:34:06 +0000 (0:00:00.073) 0:01:22.142 ********
2026-03-26 03:34:33.981246 | orchestrator |
2026-03-26 03:34:33.981256 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-26 03:34:33.981266 | orchestrator | Thursday 26 March 2026 03:34:06 +0000 (0:00:00.083) 0:01:22.226 ********
2026-03-26 03:34:33.981276 | orchestrator |
2026-03-26 03:34:33.981286 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-26 03:34:33.981296 | orchestrator | Thursday 26 March 2026 03:34:06 +0000 (0:00:00.082) 0:01:22.309 ********
2026-03-26 03:34:33.981306 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:34:33.981315 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:34:33.981325 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:34:33.981335 | orchestrator |
2026-03-26 03:34:33.981345 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-26 03:34:33.981355 | orchestrator | Thursday 26 March 2026 03:34:18 +0000 (0:00:11.526) 0:01:33.835 ********
2026-03-26 03:34:33.981365 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:34:33.981376 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:34:33.981388 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:34:33.981399 | orchestrator |
2026-03-26 03:34:33.981411 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-26 03:34:33.981422 | orchestrator | Thursday 26 March 2026 03:34:23 +0000 (0:00:05.071) 0:01:38.907 ********
2026-03-26 03:34:33.981436 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:34:33.981452 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:34:33.981468 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:34:33.981483 | orchestrator |
2026-03-26 03:34:33.981500 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:34:33.981518 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-26 03:34:33.981535 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 03:34:33.981550 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 03:34:33.981566 | orchestrator |
2026-03-26 03:34:33.981581 | orchestrator |
2026-03-26 03:34:33.981596 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:34:33.981612 | orchestrator | Thursday 26 March 2026 03:34:33 +0000 (0:00:10.439) 0:01:49.346 ********
2026-03-26 03:34:33.981627 | orchestrator | ===============================================================================
2026-03-26 03:34:33.981643 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.08s
2026-03-26 03:34:33.981673 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.50s
2026-03-26 03:34:33.981691 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.53s
2026-03-26 03:34:33.981709 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.44s
2026-03-26 03:34:33.981726 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.79s
2026-03-26 03:34:33.981743 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.39s
2026-03-26 03:34:33.981761 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.07s
2026-03-26 03:34:33.981778 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.37s
2026-03-26 03:34:33.981795 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.81s
2026-03-26 03:34:33.981806 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.62s
2026-03-26 03:34:33.981816 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.49s
2026-03-26 03:34:33.981855 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.47s
2026-03-26 03:34:33.981866 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.21s
2026-03-26 03:34:33.981876 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.40s
2026-03-26 03:34:33.981886 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.22s
2026-03-26 03:34:33.981916 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.03s
2026-03-26 03:34:33.981927 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.63s
2026-03-26 03:34:33.981944 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.62s
2026-03-26 03:34:33.981955 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.24s
2026-03-26 03:34:33.981965 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.03s
2026-03-26 03:34:36.520451 | orchestrator | 2026-03-26 03:34:36 | INFO  | Task 9fd4a647-2541-4c2c-a7fe-e47d95f2c814 (designate) was prepared for execution.
2026-03-26 03:34:36.520547 | orchestrator | 2026-03-26 03:34:36 | INFO  | It takes a moment until task 9fd4a647-2541-4c2c-a7fe-e47d95f2c814 (designate) has been started and output is visible here.
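The loop items echoed throughout the barbican play are plain kolla-style service-definition dicts (container name, image, volume list with empty placeholder slots, healthcheck, optional haproxy mapping). As a hedged illustration only (this helper is not part of the playbooks or the job), a minimal Python sketch of summarizing one such dict, using the `barbican-worker` definition literally shown above:

```python
def summarize_service(key, value):
    """Summarize a kolla-style service definition dict (illustrative helper)."""
    image = value["image"]
    # image tag is everything after the last ':' (e.g. 19.0.1.20251130)
    tag = image.rsplit(":", 1)[1]
    # kolla leaves empty placeholder strings in the volume list; drop them
    volumes = [v for v in value["volumes"] if v]
    # healthcheck 'test' is ["CMD-SHELL", "<command>"]; keep the command part
    check = value.get("healthcheck", {}).get("test", [])
    return {"service": key, "tag": tag, "volumes": volumes,
            "check": " ".join(check[1:])}

# Values taken from the barbican-worker item printed in the log above
example = {
    "container_name": "barbican_worker",
    "image": "registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130",
    "volumes": ["kolla_logs:/var/log/kolla/", "", ""],
    "healthcheck": {"interval": "30", "retries": "3",
                    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"]},
}

print(summarize_service("barbican-worker", example))
```

This only restates data already visible in the log; the actual definitions are rendered by the kolla-ansible roles.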
2026-03-26 03:35:09.113159 | orchestrator |
2026-03-26 03:35:09.113238 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 03:35:09.113245 | orchestrator |
2026-03-26 03:35:09.113250 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 03:35:09.113265 | orchestrator | Thursday 26 March 2026 03:34:40 +0000 (0:00:00.295) 0:00:00.295 ********
2026-03-26 03:35:09.113273 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:35:09.113280 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:35:09.113287 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:35:09.113293 | orchestrator |
2026-03-26 03:35:09.113299 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 03:35:09.113305 | orchestrator | Thursday 26 March 2026 03:34:41 +0000 (0:00:00.332) 0:00:00.628 ********
2026-03-26 03:35:09.113313 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-26 03:35:09.113320 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-26 03:35:09.113327 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-26 03:35:09.113333 | orchestrator |
2026-03-26 03:35:09.113340 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-26 03:35:09.113347 | orchestrator |
2026-03-26 03:35:09.113354 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-26 03:35:09.113359 | orchestrator | Thursday 26 March 2026 03:34:41 +0000 (0:00:00.485) 0:00:01.113 ********
2026-03-26 03:35:09.113364 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:35:09.113384 | orchestrator |
2026-03-26 03:35:09.113388 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-26 03:35:09.113392 | orchestrator | Thursday 26 March 2026 03:34:42 +0000 (0:00:00.570) 0:00:01.684 ******** 2026-03-26 03:35:09.113396 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-26 03:35:09.113400 | orchestrator | 2026-03-26 03:35:09.113404 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-26 03:35:09.113408 | orchestrator | Thursday 26 March 2026 03:34:45 +0000 (0:00:03.465) 0:00:05.149 ******** 2026-03-26 03:35:09.113412 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-26 03:35:09.113417 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-26 03:35:09.113421 | orchestrator | 2026-03-26 03:35:09.113424 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-26 03:35:09.113428 | orchestrator | Thursday 26 March 2026 03:34:52 +0000 (0:00:06.464) 0:00:11.614 ******** 2026-03-26 03:35:09.113432 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:35:09.113436 | orchestrator | 2026-03-26 03:35:09.113440 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-26 03:35:09.113444 | orchestrator | Thursday 26 March 2026 03:34:55 +0000 (0:00:03.235) 0:00:14.850 ******** 2026-03-26 03:35:09.113448 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:35:09.113452 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-26 03:35:09.113456 | orchestrator | 2026-03-26 03:35:09.113459 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-26 03:35:09.113463 | orchestrator | Thursday 26 March 2026 03:34:59 +0000 (0:00:04.052) 0:00:18.902 ******** 2026-03-26 03:35:09.113467 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-03-26 03:35:09.113471 | orchestrator | 2026-03-26 03:35:09.113475 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-26 03:35:09.113480 | orchestrator | Thursday 26 March 2026 03:35:02 +0000 (0:00:03.286) 0:00:22.188 ******** 2026-03-26 03:35:09.113483 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-26 03:35:09.113487 | orchestrator | 2026-03-26 03:35:09.113491 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-26 03:35:09.113495 | orchestrator | Thursday 26 March 2026 03:35:06 +0000 (0:00:03.969) 0:00:26.158 ******** 2026-03-26 03:35:09.113512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:09.113532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:09.113541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:09.113546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:09.113552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:09.113556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:09.113564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:09.113573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 
03:35:15.353891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:15.353902 | orchestrator | 2026-03-26 03:35:15.353907 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-26 03:35:15.353914 | orchestrator | Thursday 26 March 2026 03:35:09 +0000 (0:00:02.998) 0:00:29.157 ******** 2026-03-26 03:35:15.353919 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:35:15.353925 | orchestrator | 2026-03-26 03:35:15.353930 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-26 03:35:15.353935 | orchestrator | Thursday 26 March 2026 03:35:10 +0000 (0:00:00.146) 0:00:29.304 ******** 2026-03-26 03:35:15.353939 | orchestrator | skipping: [testbed-node-0] 2026-03-26 
03:35:15.353944 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:35:15.353949 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:35:15.353954 | orchestrator | 2026-03-26 03:35:15.353958 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-26 03:35:15.353963 | orchestrator | Thursday 26 March 2026 03:35:10 +0000 (0:00:00.584) 0:00:29.888 ******** 2026-03-26 03:35:15.353968 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:35:15.353974 | orchestrator | 2026-03-26 03:35:15.353978 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-26 03:35:15.353989 | orchestrator | Thursday 26 March 2026 03:35:11 +0000 (0:00:00.574) 0:00:30.462 ******** 2026-03-26 03:35:15.353999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:15.354009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:17.214502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:17.214601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:17.214772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:18.154679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:18.154769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:18.154778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:18.154805 | orchestrator | 2026-03-26 03:35:18.154854 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-26 03:35:18.154863 | orchestrator | Thursday 26 March 2026 03:35:17 +0000 (0:00:06.039) 0:00:36.501 ******** 2026-03-26 03:35:18.154887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:18.154895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:18.154918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:18.154925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:18.154932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:18.154944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-03-26 03:35:18.154951 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:35:18.154962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:18.154969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:18.154976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:18.154987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.113067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.113172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.113187 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:35:19.113214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:19.113224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:19.113232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.113241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.113264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 
03:35:19.113279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.113287 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:35:19.113295 | orchestrator | 2026-03-26 03:35:19.113304 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-26 03:35:19.113314 | orchestrator | Thursday 26 March 2026 03:35:18 +0000 (0:00:01.058) 0:00:37.560 ******** 2026-03-26 03:35:19.113327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:19.113335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:19.113342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.113357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.486964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.487073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.487085 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:35:19.487107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:19.487851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:19.487889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.487900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.487948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.487956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.487963 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:35:19.487977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:19.487984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:19.487990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.487996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:19.488012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:23.893465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:23.893550 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:35:23.893560 | orchestrator | 2026-03-26 03:35:23.893568 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-26 
03:35:23.893575 | orchestrator | Thursday 26 March 2026 03:35:19 +0000 (0:00:01.214) 0:00:38.774 ******** 2026-03-26 03:35:23.893594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:23.893602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:23.893609 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:23.893639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:23.893647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:23.893657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:23.893663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:23.893671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:23.893677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:23.893688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:23.893701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:35.886680 | orchestrator |
2026-03-26 03:35:35.886702 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-03-26 03:35:35.886723 | orchestrator | Thursday 26 March 2026 03:35:25 +0000 (0:00:06.202) 0:00:44.976 ********
2026-03-26 03:35:35.886757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:35.886778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:35.886858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:35.886884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:35.886923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:44.513020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:44.513194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:44.513289 | orchestrator |
2026-03-26 03:35:44.513294 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-03-26 03:35:44.513299 | orchestrator | Thursday 26 March 2026 03:35:40 +0000 (0:00:15.070) 0:01:00.046 ********
2026-03-26 03:35:44.513307 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-26 03:35:49.131682 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-26 03:35:49.131796 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-03-26 03:35:49.131839 | orchestrator |
2026-03-26 03:35:49.131853 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-03-26 03:35:49.131864 | orchestrator | Thursday 26 March 2026 03:35:44 +0000 (0:00:03.756) 0:01:03.802 ********
2026-03-26 03:35:49.131877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-26 03:35:49.131889 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-26 03:35:49.131900 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-03-26 03:35:49.131913 | orchestrator |
2026-03-26 03:35:49.131936 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-03-26 03:35:49.131969 | orchestrator | Thursday 26 March 2026 03:35:47 +0000 (0:00:02.632) 0:01:06.435 ********
2026-03-26 03:35:49.131988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:49.132041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:49.132059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:49.132093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:49.132108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:49.132126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:49.132151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:49.132164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:49.132176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:49.132189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:49.132212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:52.010514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:52.010679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:52.010696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:52.010708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:52.010718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:52.010728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:52.010755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:35:52.010773 | orchestrator |
2026-03-26 03:35:52.010784 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-03-26 03:35:52.010795 | orchestrator | Thursday 26 March 2026 03:35:50 +0000 (0:00:03.033) 0:01:09.469 ********
2026-03-26 03:35:52.010861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:52.010874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:52.010883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-26 03:35:52.010893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:52.010912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:53.029281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:53.029448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-26 03:35:53.029465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-26 03:35:53.029479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-26 03:35:53.029490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-26 03:35:53.029500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:53.029557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:53.029575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:53.029592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:53.029611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:53.029628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:53.029647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:53.029665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:53.029694 | orchestrator | 2026-03-26 03:35:53.029714 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-26 03:35:53.029745 | orchestrator | Thursday 26 March 2026 03:35:53 +0000 (0:00:02.845) 0:01:12.314 ******** 2026-03-26 03:35:54.028403 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:35:54.028537 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:35:54.028548 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:35:54.028555 | orchestrator | 2026-03-26 03:35:54.028563 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-26 03:35:54.028572 | orchestrator | Thursday 26 March 2026 03:35:53 +0000 (0:00:00.331) 0:01:12.646 ******** 2026-03-26 03:35:54.028603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:54.028614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:54.028622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:54.028630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:54.028685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:54.028729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:54.028738 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:35:54.028749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:54.028757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:54.028764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:54.028771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:54.028786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:54.028800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:57.420084 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:35:57.420205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-26 03:35:57.420222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 03:35:57.420234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 03:35:57.420245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 03:35:57.420273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 03:35:57.420283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:35:57.420293 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:35:57.420303 | orchestrator | 2026-03-26 03:35:57.420328 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-26 03:35:57.420339 | orchestrator | Thursday 26 March 2026 03:35:54 +0000 (0:00:00.800) 0:01:13.447 ******** 2026-03-26 03:35:57.420353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:57.420364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:57.420374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-26 03:35:57.420390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:57.420405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:35:59.216669 | orchestrator | 2026-03-26 03:35:59.216679 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-26 03:35:59.216690 | orchestrator | Thursday 26 March 2026 03:35:58 +0000 (0:00:04.719) 0:01:18.166 ******** 2026-03-26 03:35:59.216698 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:35:59.216712 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:37:15.683260 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:37:15.683376 | orchestrator | 2026-03-26 03:37:15.683400 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-03-26 03:37:15.683436 | orchestrator | Thursday 26 March 2026 03:35:59 +0000 (0:00:00.339) 0:01:18.506 ******** 2026-03-26 03:37:15.683455 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-26 03:37:15.683472 | orchestrator | 2026-03-26 03:37:15.683487 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-26 03:37:15.683503 | orchestrator | Thursday 26 March 2026 03:36:01 +0000 (0:00:02.148) 0:01:20.655 ******** 2026-03-26 03:37:15.683519 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-26 03:37:15.683534 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-26 03:37:15.683549 | orchestrator | 2026-03-26 03:37:15.683566 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-26 03:37:15.683582 | orchestrator | Thursday 26 March 2026 03:36:03 +0000 (0:00:02.342) 0:01:22.997 ******** 2026-03-26 03:37:15.683598 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.683615 | orchestrator | 2026-03-26 03:37:15.683632 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-26 03:37:15.683649 | orchestrator | Thursday 26 March 2026 03:36:19 +0000 (0:00:16.271) 0:01:39.269 ******** 2026-03-26 03:37:15.683665 | orchestrator | 2026-03-26 03:37:15.683683 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-26 03:37:15.683700 | orchestrator | Thursday 26 March 2026 03:36:20 +0000 (0:00:00.073) 0:01:39.342 ******** 2026-03-26 03:37:15.683717 | orchestrator | 2026-03-26 03:37:15.683759 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-26 03:37:15.683875 | orchestrator | Thursday 26 March 2026 03:36:20 +0000 (0:00:00.074) 0:01:39.417 ******** 2026-03-26 03:37:15.683892 | orchestrator | 2026-03-26 
03:37:15.683904 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-26 03:37:15.683915 | orchestrator | Thursday 26 March 2026 03:36:20 +0000 (0:00:00.072) 0:01:39.489 ******** 2026-03-26 03:37:15.683927 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.683939 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:37:15.683950 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:37:15.683961 | orchestrator | 2026-03-26 03:37:15.683973 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-26 03:37:15.683984 | orchestrator | Thursday 26 March 2026 03:36:28 +0000 (0:00:07.943) 0:01:47.432 ******** 2026-03-26 03:37:15.683995 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.684006 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:37:15.684017 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:37:15.684028 | orchestrator | 2026-03-26 03:37:15.684039 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-26 03:37:15.684051 | orchestrator | Thursday 26 March 2026 03:36:38 +0000 (0:00:10.718) 0:01:58.151 ******** 2026-03-26 03:37:15.684063 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.684074 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:37:15.684085 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:37:15.684096 | orchestrator | 2026-03-26 03:37:15.684107 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-26 03:37:15.684119 | orchestrator | Thursday 26 March 2026 03:36:44 +0000 (0:00:05.939) 0:02:04.091 ******** 2026-03-26 03:37:15.684130 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.684141 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:37:15.684152 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:37:15.684163 | orchestrator | 2026-03-26 03:37:15.684174 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-26 03:37:15.684186 | orchestrator | Thursday 26 March 2026 03:36:50 +0000 (0:00:05.941) 0:02:10.032 ******** 2026-03-26 03:37:15.684197 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.684208 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:37:15.684220 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:37:15.684230 | orchestrator | 2026-03-26 03:37:15.684242 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-26 03:37:15.684253 | orchestrator | Thursday 26 March 2026 03:36:56 +0000 (0:00:06.071) 0:02:16.104 ******** 2026-03-26 03:37:15.684264 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.684275 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:37:15.684286 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:37:15.684297 | orchestrator | 2026-03-26 03:37:15.684308 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-26 03:37:15.684320 | orchestrator | Thursday 26 March 2026 03:37:07 +0000 (0:00:11.022) 0:02:27.127 ******** 2026-03-26 03:37:15.684331 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:37:15.684342 | orchestrator | 2026-03-26 03:37:15.684353 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:37:15.684366 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 03:37:15.684378 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-26 03:37:15.684387 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-26 03:37:15.684397 | orchestrator | 2026-03-26 03:37:15.684407 | orchestrator | 2026-03-26 03:37:15.684417 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-26 03:37:15.684436 | orchestrator | Thursday 26 March 2026 03:37:15 +0000 (0:00:07.390) 0:02:34.517 ******** 2026-03-26 03:37:15.684446 | orchestrator | =============================================================================== 2026-03-26 03:37:15.684456 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.27s 2026-03-26 03:37:15.684466 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.07s 2026-03-26 03:37:15.684494 | orchestrator | designate : Restart designate-worker container ------------------------- 11.02s 2026-03-26 03:37:15.684505 | orchestrator | designate : Restart designate-api container ---------------------------- 10.72s 2026-03-26 03:37:15.684522 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 7.94s 2026-03-26 03:37:15.684533 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.39s 2026-03-26 03:37:15.684543 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.46s 2026-03-26 03:37:15.684552 | orchestrator | designate : Copying over config.json files for services ----------------- 6.20s 2026-03-26 03:37:15.684562 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.07s 2026-03-26 03:37:15.684572 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.04s 2026-03-26 03:37:15.684582 | orchestrator | designate : Restart designate-producer container ------------------------ 5.94s 2026-03-26 03:37:15.684591 | orchestrator | designate : Restart designate-central container ------------------------- 5.94s 2026-03-26 03:37:15.684601 | orchestrator | designate : Check designate containers ---------------------------------- 4.72s 2026-03-26 03:37:15.684611 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.05s 2026-03-26 03:37:15.684621 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.97s 2026-03-26 03:37:15.684630 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.76s 2026-03-26 03:37:15.684640 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.47s 2026-03-26 03:37:15.684649 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.29s 2026-03-26 03:37:15.684659 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.24s 2026-03-26 03:37:15.684669 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.03s 2026-03-26 03:37:18.278473 | orchestrator | 2026-03-26 03:37:18 | INFO  | Task 435223f9-b996-4394-82a5-4bcb72d6599b (octavia) was prepared for execution. 2026-03-26 03:37:18.278572 | orchestrator | 2026-03-26 03:37:18 | INFO  | It takes a moment until task 435223f9-b996-4394-82a5-4bcb72d6599b (octavia) has been started and output is visible here. 
2026-03-26 03:39:26.652947 | orchestrator | 2026-03-26 03:39:26.653073 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:39:26.653095 | orchestrator | 2026-03-26 03:39:26.653105 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:39:26.653114 | orchestrator | Thursday 26 March 2026 03:37:22 +0000 (0:00:00.271) 0:00:00.271 ******** 2026-03-26 03:39:26.653123 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:26.653132 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:39:26.653140 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:39:26.653148 | orchestrator | 2026-03-26 03:39:26.653157 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:39:26.653165 | orchestrator | Thursday 26 March 2026 03:37:23 +0000 (0:00:00.307) 0:00:00.579 ******** 2026-03-26 03:39:26.653173 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-26 03:39:26.653182 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-26 03:39:26.653191 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-26 03:39:26.653199 | orchestrator | 2026-03-26 03:39:26.653208 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-26 03:39:26.653216 | orchestrator | 2026-03-26 03:39:26.653225 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-26 03:39:26.653254 | orchestrator | Thursday 26 March 2026 03:37:23 +0000 (0:00:00.461) 0:00:01.041 ******** 2026-03-26 03:39:26.653263 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:39:26.653272 | orchestrator | 2026-03-26 03:39:26.653280 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-03-26 03:39:26.653289 | orchestrator | Thursday 26 March 2026 03:37:24 +0000 (0:00:00.579) 0:00:01.621 ******** 2026-03-26 03:39:26.653298 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-26 03:39:26.653306 | orchestrator | 2026-03-26 03:39:26.653314 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-26 03:39:26.653322 | orchestrator | Thursday 26 March 2026 03:37:27 +0000 (0:00:03.307) 0:00:04.928 ******** 2026-03-26 03:39:26.653330 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-26 03:39:26.653338 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-03-26 03:39:26.653346 | orchestrator | 2026-03-26 03:39:26.653354 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-26 03:39:26.653362 | orchestrator | Thursday 26 March 2026 03:37:34 +0000 (0:00:06.535) 0:00:11.464 ******** 2026-03-26 03:39:26.653370 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:39:26.653378 | orchestrator | 2026-03-26 03:39:26.653386 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-26 03:39:26.653395 | orchestrator | Thursday 26 March 2026 03:37:37 +0000 (0:00:03.320) 0:00:14.785 ******** 2026-03-26 03:39:26.653403 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:39:26.653411 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-26 03:39:26.653419 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-26 03:39:26.653427 | orchestrator | 2026-03-26 03:39:26.653435 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-26 03:39:26.653443 | orchestrator | Thursday 26 March 2026 03:37:45 +0000 
(0:00:08.315) 0:00:23.100 ******** 2026-03-26 03:39:26.653452 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 03:39:26.653460 | orchestrator | 2026-03-26 03:39:26.653468 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-26 03:39:26.653489 | orchestrator | Thursday 26 March 2026 03:37:49 +0000 (0:00:03.289) 0:00:26.390 ******** 2026-03-26 03:39:26.653497 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-26 03:39:26.653505 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-26 03:39:26.653513 | orchestrator | 2026-03-26 03:39:26.653521 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-26 03:39:26.653529 | orchestrator | Thursday 26 March 2026 03:37:56 +0000 (0:00:07.620) 0:00:34.010 ******** 2026-03-26 03:39:26.653537 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-26 03:39:26.653545 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-26 03:39:26.653553 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-26 03:39:26.653561 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-26 03:39:26.653569 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-26 03:39:26.653577 | orchestrator | 2026-03-26 03:39:26.653585 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-26 03:39:26.653593 | orchestrator | Thursday 26 March 2026 03:38:12 +0000 (0:00:16.050) 0:00:50.060 ******** 2026-03-26 03:39:26.653603 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:39:26.653616 | orchestrator | 2026-03-26 03:39:26.653629 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-03-26 03:39:26.653651 | orchestrator | Thursday 26 March 2026 03:38:13 +0000 (0:00:00.780) 0:00:50.840 ******** 2026-03-26 03:39:26.653664 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.653678 | orchestrator | 2026-03-26 03:39:26.653691 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-26 03:39:26.653705 | orchestrator | Thursday 26 March 2026 03:38:18 +0000 (0:00:05.038) 0:00:55.879 ******** 2026-03-26 03:39:26.653719 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.653732 | orchestrator | 2026-03-26 03:39:26.653745 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-26 03:39:26.653805 | orchestrator | Thursday 26 March 2026 03:38:23 +0000 (0:00:04.454) 0:01:00.333 ******** 2026-03-26 03:39:26.653820 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:26.653833 | orchestrator | 2026-03-26 03:39:26.653847 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-26 03:39:26.653860 | orchestrator | Thursday 26 March 2026 03:38:26 +0000 (0:00:03.281) 0:01:03.615 ******** 2026-03-26 03:39:26.653874 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-26 03:39:26.653887 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-26 03:39:26.653900 | orchestrator | 2026-03-26 03:39:26.653913 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-26 03:39:26.653925 | orchestrator | Thursday 26 March 2026 03:38:36 +0000 (0:00:10.054) 0:01:13.669 ******** 2026-03-26 03:39:26.653940 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-26 03:39:26.653955 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-26 03:39:26.653969 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-26 03:39:26.653984 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-26 03:39:26.653997 | orchestrator | 2026-03-26 03:39:26.654011 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-26 03:39:26.654092 | orchestrator | Thursday 26 March 2026 03:38:53 +0000 (0:00:16.922) 0:01:30.592 ******** 2026-03-26 03:39:26.654110 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654124 | orchestrator | 2026-03-26 03:39:26.654137 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-26 03:39:26.654150 | orchestrator | Thursday 26 March 2026 03:38:57 +0000 (0:00:04.607) 0:01:35.199 ******** 2026-03-26 03:39:26.654163 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654176 | orchestrator | 2026-03-26 03:39:26.654190 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-26 03:39:26.654204 | orchestrator | Thursday 26 March 2026 03:39:03 +0000 (0:00:05.523) 0:01:40.723 ******** 2026-03-26 03:39:26.654217 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:39:26.654230 | orchestrator | 2026-03-26 03:39:26.654242 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-26 03:39:26.654256 | orchestrator | Thursday 26 March 2026 03:39:03 +0000 (0:00:00.221) 0:01:40.944 ******** 2026-03-26 03:39:26.654270 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:26.654283 | orchestrator | 2026-03-26 03:39:26.654297 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-26 03:39:26.654310 | orchestrator | Thursday 26 March 2026 03:39:08 +0000 (0:00:04.459) 0:01:45.403 ******** 2026-03-26 03:39:26.654323 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:39:26.654337 | orchestrator | 2026-03-26 03:39:26.654350 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-26 03:39:26.654364 | orchestrator | Thursday 26 March 2026 03:39:09 +0000 (0:00:01.158) 0:01:46.561 ******** 2026-03-26 03:39:26.654384 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:39:26.654392 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:39:26.654401 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654409 | orchestrator | 2026-03-26 03:39:26.654417 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-26 03:39:26.654431 | orchestrator | Thursday 26 March 2026 03:39:14 +0000 (0:00:05.629) 0:01:52.191 ******** 2026-03-26 03:39:26.654439 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:39:26.654447 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654455 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:39:26.654463 | orchestrator | 2026-03-26 03:39:26.654471 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-26 03:39:26.654479 | orchestrator | Thursday 26 March 2026 03:39:19 +0000 (0:00:04.313) 0:01:56.505 ******** 2026-03-26 03:39:26.654487 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654495 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:39:26.654503 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:39:26.654511 | orchestrator | 2026-03-26 03:39:26.654519 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-26 
03:39:26.654527 | orchestrator | Thursday 26 March 2026 03:39:20 +0000 (0:00:01.047) 0:01:57.552 ******** 2026-03-26 03:39:26.654535 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:26.654542 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:39:26.654550 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:39:26.654558 | orchestrator | 2026-03-26 03:39:26.654566 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-26 03:39:26.654574 | orchestrator | Thursday 26 March 2026 03:39:21 +0000 (0:00:01.716) 0:01:59.268 ******** 2026-03-26 03:39:26.654582 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654590 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:39:26.654598 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:39:26.654606 | orchestrator | 2026-03-26 03:39:26.654614 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-26 03:39:26.654622 | orchestrator | Thursday 26 March 2026 03:39:23 +0000 (0:00:01.276) 0:02:00.545 ******** 2026-03-26 03:39:26.654630 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654638 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:39:26.654646 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:39:26.654654 | orchestrator | 2026-03-26 03:39:26.654662 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-26 03:39:26.654670 | orchestrator | Thursday 26 March 2026 03:39:24 +0000 (0:00:01.188) 0:02:01.734 ******** 2026-03-26 03:39:26.654678 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:39:26.654686 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:39:26.654694 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:26.654701 | orchestrator | 2026-03-26 03:39:26.654719 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-26 03:39:53.932046 | orchestrator 
| Thursday 26 March 2026 03:39:26 +0000 (0:00:02.205) 0:02:03.939 ******** 2026-03-26 03:39:53.932189 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:39:53.932215 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:39:53.932234 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:39:53.932253 | orchestrator | 2026-03-26 03:39:53.932270 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-26 03:39:53.932288 | orchestrator | Thursday 26 March 2026 03:39:28 +0000 (0:00:01.515) 0:02:05.455 ******** 2026-03-26 03:39:53.932304 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:53.932322 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:39:53.932340 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:39:53.932357 | orchestrator | 2026-03-26 03:39:53.932374 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-03-26 03:39:53.932390 | orchestrator | Thursday 26 March 2026 03:39:28 +0000 (0:00:00.643) 0:02:06.099 ******** 2026-03-26 03:39:53.932406 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:39:53.932457 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:39:53.932476 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:53.932493 | orchestrator | 2026-03-26 03:39:53.932511 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-26 03:39:53.932526 | orchestrator | Thursday 26 March 2026 03:39:31 +0000 (0:00:03.116) 0:02:09.216 ******** 2026-03-26 03:39:53.932541 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:39:53.932557 | orchestrator | 2026-03-26 03:39:53.932572 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-26 03:39:53.932588 | orchestrator | Thursday 26 March 2026 03:39:32 +0000 (0:00:00.575) 0:02:09.791 ******** 2026-03-26 
03:39:53.932603 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:53.932620 | orchestrator | 2026-03-26 03:39:53.932635 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-26 03:39:53.932650 | orchestrator | Thursday 26 March 2026 03:39:36 +0000 (0:00:03.990) 0:02:13.781 ******** 2026-03-26 03:39:53.932666 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:53.932681 | orchestrator | 2026-03-26 03:39:53.932697 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-26 03:39:53.932714 | orchestrator | Thursday 26 March 2026 03:39:39 +0000 (0:00:03.294) 0:02:17.076 ******** 2026-03-26 03:39:53.932729 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-26 03:39:53.932746 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-26 03:39:53.932763 | orchestrator | 2026-03-26 03:39:53.932810 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-26 03:39:53.932828 | orchestrator | Thursday 26 March 2026 03:39:47 +0000 (0:00:07.349) 0:02:24.425 ******** 2026-03-26 03:39:53.932844 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:53.932859 | orchestrator | 2026-03-26 03:39:53.932875 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-26 03:39:53.932892 | orchestrator | Thursday 26 March 2026 03:39:51 +0000 (0:00:04.216) 0:02:28.641 ******** 2026-03-26 03:39:53.932909 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:39:53.932924 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:39:53.932941 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:39:53.932958 | orchestrator | 2026-03-26 03:39:53.932975 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-26 03:39:53.932991 | orchestrator | Thursday 26 March 2026 03:39:51 +0000 (0:00:00.555) 0:02:29.197 ******** 
2026-03-26 03:39:53.933031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:39:53.933081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:39:53.933108 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:39:53.933120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:39:53.933131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:39:53.933146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:39:53.933158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:39:53.933168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:39:53.933194 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:39:55.471580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:39:55.471706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:39:55.471725 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:39:55.471756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:39:55.471837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:39:55.471873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:39:55.471886 | orchestrator |
2026-03-26 03:39:55.471900 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-03-26 03:39:55.471913 | orchestrator | Thursday 26 March 2026 03:39:54 +0000 (0:00:02.522) 0:02:31.719 ********
2026-03-26 03:39:55.471924 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:39:55.471937 | orchestrator |
2026-03-26 03:39:55.471984 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-03-26 03:39:55.471995 | orchestrator | Thursday 26 March 2026 03:39:54 +0000 (0:00:00.140) 0:02:31.860 ********
2026-03-26 03:39:55.472005 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:39:55.472032 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:39:55.472043 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:39:55.472053 | orchestrator |
2026-03-26 03:39:55.472063 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-03-26 03:39:55.472073 | orchestrator | Thursday 26 March 2026 03:39:54 +0000 (0:00:00.311) 0:02:32.171 ********
2026-03-26 03:39:55.472084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:39:55.472096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:39:55.472113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:39:55.472143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:39:55.472161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:39:55.472172 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:39:55.472191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:40:00.299030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:00.299148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:00.299184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:40:00.299199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:40:00.299249 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:40:00.299276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:40:00.299290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:00.299322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:00.299334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:40:00.299351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:40:00.299372 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:40:00.299412 | orchestrator |
2026-03-26 03:40:00.299426 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-26 03:40:00.299439 | orchestrator | Thursday 26 March 2026 03:39:55 +0000 (0:00:00.704) 0:02:32.876 ********
2026-03-26 03:40:00.299451 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:40:00.299462 | orchestrator |
2026-03-26 03:40:00.299474 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-03-26 03:40:00.299484 | orchestrator | Thursday 26 March 2026 03:39:56 +0000 (0:00:00.754) 0:02:33.631 ********
2026-03-26 03:40:00.299497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:00.299510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:00.299531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:01.862971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:01.863113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:01.863131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:01.863144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:01.863157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:01.863169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:01.863202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:01.863223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:01.863262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:01.863283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:40:01.863303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:40:01.863323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-26 03:40:01.863344 | orchestrator |
2026-03-26 03:40:01.863367 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-03-26 03:40:01.863388 | orchestrator | Thursday 26 March 2026 03:40:01 +0000 (0:00:04.963) 0:02:38.594 ********
2026-03-26 03:40:01.863422 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:40:01.964067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:01.964167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:01.964180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:40:01.964190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:40:01.964198 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:40:01.964208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:40:01.964217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:01.964253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:01.964277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:40:01.964285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:40:01.964292 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:40:01.964299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:40:01.964307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:01.964314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:01.964341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-03-26 03:40:02.807054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:40:02.807159 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:40:02.807178 | orchestrator | 2026-03-26 03:40:02.807192 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-26 03:40:02.807205 | orchestrator | Thursday 26 March 2026 03:40:01 +0000 (0:00:00.668) 0:02:39.262 ******** 2026-03-26 03:40:02.807218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-03-26 03:40:02.807232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:02.807244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:02.807257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:40:02.807319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:40:02.807341 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:40:02.807368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:40:02.807388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:02.807406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:02.807423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:40:02.807459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:40:02.807480 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:40:02.807520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 03:40:07.634114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 03:40:07.634212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 03:40:07.634224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 03:40:07.634233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 03:40:07.634265 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:40:07.634274 | orchestrator | 2026-03-26 03:40:07.634281 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-26 
03:40:07.634289 | orchestrator | Thursday 26 March 2026 03:40:03 +0000 (0:00:01.337) 0:02:40.600 ******** 2026-03-26 03:40:07.634297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:07.634332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:07.634339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:07.634345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:07.634352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:07.634365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:07.634372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:07.634388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:24.144279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:24.144422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:24.144448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:24.144501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:24.144522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:40:24.144541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-03-26 03:40:24.144603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:40:24.144626 | orchestrator | 2026-03-26 03:40:24.144646 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-26 03:40:24.144666 | orchestrator | Thursday 26 March 2026 03:40:08 +0000 (0:00:05.318) 0:02:45.919 ******** 2026-03-26 03:40:24.144684 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-26 03:40:24.144704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-26 03:40:24.144723 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-26 03:40:24.144743 | orchestrator | 2026-03-26 03:40:24.144794 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-26 03:40:24.144822 | orchestrator | Thursday 26 March 2026 03:40:10 +0000 (0:00:01.698) 0:02:47.617 ******** 2026-03-26 03:40:24.144851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:24.144898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:24.144926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:24.144978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:39.978408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:39.978529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:39.978547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:40:39.978725 | orchestrator | 2026-03-26 03:40:39.978738 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-26 03:40:39.978751 | orchestrator | Thursday 26 March 2026 03:40:27 +0000 (0:00:17.437) 0:03:05.055 ******** 2026-03-26 03:40:39.978796 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:40:39.978810 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:40:39.978822 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:40:39.978834 | orchestrator | 2026-03-26 03:40:39.978845 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-26 03:40:39.978856 | orchestrator | Thursday 26 March 2026 03:40:29 +0000 (0:00:01.902) 0:03:06.958 ******** 2026-03-26 03:40:39.978867 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-26 03:40:39.978879 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-26 03:40:39.978890 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-26 03:40:39.978901 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-26 03:40:39.978911 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-26 03:40:39.978922 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-26 03:40:39.978933 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-26 03:40:39.978944 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-26 03:40:39.978954 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-26 03:40:39.978965 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-26 03:40:39.978976 | orchestrator 
| changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-26 03:40:39.978987 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-26 03:40:39.978998 | orchestrator | 2026-03-26 03:40:39.979008 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-26 03:40:39.979026 | orchestrator | Thursday 26 March 2026 03:40:34 +0000 (0:00:05.103) 0:03:12.061 ******** 2026-03-26 03:40:39.979037 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-26 03:40:39.979048 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-26 03:40:39.979069 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-26 03:40:48.489494 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-26 03:40:48.489602 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-26 03:40:48.489617 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-26 03:40:48.489629 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-26 03:40:48.489640 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-26 03:40:48.489651 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-26 03:40:48.489662 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-26 03:40:48.489673 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-26 03:40:48.489684 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-26 03:40:48.489695 | orchestrator | 2026-03-26 03:40:48.489707 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-26 03:40:48.489719 | orchestrator | Thursday 26 March 2026 03:40:39 +0000 (0:00:05.205) 0:03:17.266 ******** 2026-03-26 03:40:48.489730 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-03-26 03:40:48.489740 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-26 03:40:48.489751 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-26 03:40:48.489832 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-26 03:40:48.489844 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-26 03:40:48.489855 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-26 03:40:48.489866 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-26 03:40:48.489877 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-26 03:40:48.489888 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-26 03:40:48.489899 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-26 03:40:48.489914 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-26 03:40:48.489932 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-26 03:40:48.489951 | orchestrator | 2026-03-26 03:40:48.489969 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-26 03:40:48.489987 | orchestrator | Thursday 26 March 2026 03:40:45 +0000 (0:00:05.318) 0:03:22.585 ******** 2026-03-26 03:40:48.490010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:48.490117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:48.490228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 03:40:48.490258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:48.490281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-26 03:40:48.490301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-26 03:40:48.490322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:48.490337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:48.490366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-26 03:40:48.490389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:42:17.343315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:42:17.343407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-26 03:42:17.343418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:17.343426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:17.343452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:17.343461 | orchestrator | 2026-03-26 
03:42:17.343470 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-26 03:42:17.343478 | orchestrator | Thursday 26 March 2026 03:40:49 +0000 (0:00:04.089) 0:03:26.675 ******** 2026-03-26 03:42:17.343485 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:17.343493 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:17.343500 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:17.343507 | orchestrator | 2026-03-26 03:42:17.343525 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-26 03:42:17.343533 | orchestrator | Thursday 26 March 2026 03:40:49 +0000 (0:00:00.588) 0:03:27.263 ******** 2026-03-26 03:42:17.343539 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343546 | orchestrator | 2026-03-26 03:42:17.343553 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-26 03:42:17.343560 | orchestrator | Thursday 26 March 2026 03:40:52 +0000 (0:00:02.237) 0:03:29.501 ******** 2026-03-26 03:42:17.343567 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343573 | orchestrator | 2026-03-26 03:42:17.343580 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-26 03:42:17.343587 | orchestrator | Thursday 26 March 2026 03:40:54 +0000 (0:00:02.128) 0:03:31.630 ******** 2026-03-26 03:42:17.343594 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343600 | orchestrator | 2026-03-26 03:42:17.343607 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-26 03:42:17.343616 | orchestrator | Thursday 26 March 2026 03:40:56 +0000 (0:00:02.233) 0:03:33.863 ******** 2026-03-26 03:42:17.343635 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343643 | orchestrator | 2026-03-26 03:42:17.343650 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-03-26 03:42:17.343656 | orchestrator | Thursday 26 March 2026 03:40:58 +0000 (0:00:02.189) 0:03:36.053 ******** 2026-03-26 03:42:17.343663 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343670 | orchestrator | 2026-03-26 03:42:17.343677 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-26 03:42:17.343684 | orchestrator | Thursday 26 March 2026 03:41:21 +0000 (0:00:23.142) 0:03:59.196 ******** 2026-03-26 03:42:17.343690 | orchestrator | 2026-03-26 03:42:17.343697 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-26 03:42:17.343704 | orchestrator | Thursday 26 March 2026 03:41:21 +0000 (0:00:00.073) 0:03:59.269 ******** 2026-03-26 03:42:17.343710 | orchestrator | 2026-03-26 03:42:17.343717 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-26 03:42:17.343724 | orchestrator | Thursday 26 March 2026 03:41:22 +0000 (0:00:00.070) 0:03:59.340 ******** 2026-03-26 03:42:17.343730 | orchestrator | 2026-03-26 03:42:17.343737 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-26 03:42:17.343744 | orchestrator | Thursday 26 March 2026 03:41:22 +0000 (0:00:00.071) 0:03:59.411 ******** 2026-03-26 03:42:17.343751 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343781 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:42:17.343793 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:42:17.343800 | orchestrator | 2026-03-26 03:42:17.343807 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-26 03:42:17.343814 | orchestrator | Thursday 26 March 2026 03:41:39 +0000 (0:00:17.243) 0:04:16.654 ******** 2026-03-26 03:42:17.343826 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343834 | orchestrator | changed: 
[testbed-node-1] 2026-03-26 03:42:17.343840 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:42:17.343847 | orchestrator | 2026-03-26 03:42:17.343854 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-26 03:42:17.343865 | orchestrator | Thursday 26 March 2026 03:41:50 +0000 (0:00:11.439) 0:04:28.093 ******** 2026-03-26 03:42:17.343876 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343889 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:42:17.343905 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:42:17.343917 | orchestrator | 2026-03-26 03:42:17.343927 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-26 03:42:17.343937 | orchestrator | Thursday 26 March 2026 03:42:01 +0000 (0:00:10.580) 0:04:38.674 ******** 2026-03-26 03:42:17.343948 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:42:17.343959 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.343970 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:42:17.343980 | orchestrator | 2026-03-26 03:42:17.343990 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-26 03:42:17.343999 | orchestrator | Thursday 26 March 2026 03:42:11 +0000 (0:00:10.104) 0:04:48.778 ******** 2026-03-26 03:42:17.344009 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:42:17.344021 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:42:17.344031 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:42:17.344043 | orchestrator | 2026-03-26 03:42:17.344052 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:42:17.344064 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-26 03:42:17.344077 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-26 03:42:17.344088 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-26 03:42:17.344100 | orchestrator | 2026-03-26 03:42:17.344110 | orchestrator | 2026-03-26 03:42:17.344121 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:42:17.344133 | orchestrator | Thursday 26 March 2026 03:42:17 +0000 (0:00:05.841) 0:04:54.619 ******** 2026-03-26 03:42:17.344145 | orchestrator | =============================================================================== 2026-03-26 03:42:17.344157 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.14s 2026-03-26 03:42:17.344168 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.44s 2026-03-26 03:42:17.344180 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.24s 2026-03-26 03:42:17.344192 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.92s 2026-03-26 03:42:17.344200 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.05s 2026-03-26 03:42:17.344214 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.44s 2026-03-26 03:42:17.344222 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.58s 2026-03-26 03:42:17.344230 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.10s 2026-03-26 03:42:17.344237 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.05s 2026-03-26 03:42:17.344245 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.32s 2026-03-26 03:42:17.344254 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.62s 2026-03-26 03:42:17.344264 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 7.35s 2026-03-26 03:42:17.344271 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.54s 2026-03-26 03:42:17.344285 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.84s 2026-03-26 03:42:17.344298 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.63s 2026-03-26 03:42:17.708124 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.52s 2026-03-26 03:42:17.708201 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.32s 2026-03-26 03:42:17.708210 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.32s 2026-03-26 03:42:17.708217 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.21s 2026-03-26 03:42:17.708224 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.10s 2026-03-26 03:42:20.130185 | orchestrator | 2026-03-26 03:42:20 | INFO  | Task ced74339-9146-4768-949d-5ed80733d53c (ceilometer) was prepared for execution. 2026-03-26 03:42:20.130261 | orchestrator | 2026-03-26 03:42:20 | INFO  | It takes a moment until task ced74339-9146-4768-949d-5ed80733d53c (ceilometer) has been started and output is visible here. 
2026-03-26 03:42:44.377950 | orchestrator | 2026-03-26 03:42:44.378119 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:42:44.378164 | orchestrator | 2026-03-26 03:42:44.378177 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:42:44.378189 | orchestrator | Thursday 26 March 2026 03:42:24 +0000 (0:00:00.277) 0:00:00.277 ******** 2026-03-26 03:42:44.378201 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:42:44.378213 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:42:44.378225 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:42:44.378237 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:42:44.378248 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:42:44.378259 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:42:44.378269 | orchestrator | 2026-03-26 03:42:44.378281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:42:44.378292 | orchestrator | Thursday 26 March 2026 03:42:25 +0000 (0:00:00.720) 0:00:00.998 ******** 2026-03-26 03:42:44.378303 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-03-26 03:42:44.378315 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-03-26 03:42:44.378325 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-03-26 03:42:44.378336 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-03-26 03:42:44.378347 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-03-26 03:42:44.378358 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-03-26 03:42:44.378369 | orchestrator | 2026-03-26 03:42:44.378380 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-03-26 03:42:44.378391 | orchestrator | 2026-03-26 03:42:44.378402 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-03-26 03:42:44.378415 | orchestrator | Thursday 26 March 2026 03:42:25 +0000 (0:00:00.636) 0:00:01.635 ******** 2026-03-26 03:42:44.378428 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:42:44.378442 | orchestrator | 2026-03-26 03:42:44.378455 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-03-26 03:42:44.378468 | orchestrator | Thursday 26 March 2026 03:42:27 +0000 (0:00:01.271) 0:00:02.906 ******** 2026-03-26 03:42:44.378480 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:44.378492 | orchestrator | 2026-03-26 03:42:44.378504 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-03-26 03:42:44.378517 | orchestrator | Thursday 26 March 2026 03:42:27 +0000 (0:00:00.131) 0:00:03.038 ******** 2026-03-26 03:42:44.378530 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:44.378542 | orchestrator | 2026-03-26 03:42:44.378554 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-03-26 03:42:44.378593 | orchestrator | Thursday 26 March 2026 03:42:27 +0000 (0:00:00.136) 0:00:03.174 ******** 2026-03-26 03:42:44.378606 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:42:44.378618 | orchestrator | 2026-03-26 03:42:44.378631 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-03-26 03:42:44.378643 | orchestrator | Thursday 26 March 2026 03:42:31 +0000 (0:00:03.889) 0:00:07.064 ******** 2026-03-26 03:42:44.378656 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-26 03:42:44.378668 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-03-26 03:42:44.378679 | orchestrator | 
2026-03-26 03:42:44.378690 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-03-26 03:42:44.378701 | orchestrator | Thursday 26 March 2026 03:42:35 +0000 (0:00:04.001) 0:00:11.065 ******** 2026-03-26 03:42:44.378712 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 03:42:44.378723 | orchestrator | 2026-03-26 03:42:44.378734 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-03-26 03:42:44.378785 | orchestrator | Thursday 26 March 2026 03:42:38 +0000 (0:00:03.231) 0:00:14.296 ******** 2026-03-26 03:42:44.378799 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-03-26 03:42:44.378810 | orchestrator | 2026-03-26 03:42:44.378821 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-03-26 03:42:44.378832 | orchestrator | Thursday 26 March 2026 03:42:42 +0000 (0:00:04.305) 0:00:18.602 ******** 2026-03-26 03:42:44.378843 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:44.378853 | orchestrator | 2026-03-26 03:42:44.378864 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-03-26 03:42:44.378875 | orchestrator | Thursday 26 March 2026 03:42:42 +0000 (0:00:00.139) 0:00:18.742 ******** 2026-03-26 03:42:44.378890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:44.378926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:44.378939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:44.378951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:44.378973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:42:44.378986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:44.379005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:42:44.379034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:42:49.300572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:42:49.301620 | orchestrator | 2026-03-26 03:42:49.301682 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-03-26 03:42:49.301741 | orchestrator | Thursday 26 March 2026 03:42:44 +0000 (0:00:01.453) 0:00:20.196 ******** 2026-03-26 03:42:49.301813 | orchestrator | ok: 
[testbed-node-2 -> localhost] 2026-03-26 03:42:49.301835 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-26 03:42:49.301853 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:42:49.301872 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-26 03:42:49.301891 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-26 03:42:49.301909 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-26 03:42:49.301927 | orchestrator | 2026-03-26 03:42:49.301945 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-03-26 03:42:49.301964 | orchestrator | Thursday 26 March 2026 03:42:46 +0000 (0:00:01.683) 0:00:21.880 ******** 2026-03-26 03:42:49.301983 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:42:49.302002 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:42:49.302088 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:42:49.302111 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:42:49.302129 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:42:49.302149 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:42:49.302167 | orchestrator | 2026-03-26 03:42:49.302186 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-03-26 03:42:49.302205 | orchestrator | Thursday 26 March 2026 03:42:46 +0000 (0:00:00.623) 0:00:22.503 ******** 2026-03-26 03:42:49.302224 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:49.302244 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:49.302264 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:49.302283 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:42:49.302303 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:42:49.302322 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:42:49.302342 | orchestrator | 2026-03-26 03:42:49.302361 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-03-26 03:42:49.302382 | orchestrator | Thursday 26 March 2026 03:42:47 +0000 (0:00:00.861) 0:00:23.364 ******** 2026-03-26 03:42:49.302401 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:42:49.302421 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:42:49.302439 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:42:49.302460 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:42:49.302477 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:42:49.302561 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:42:49.302585 | orchestrator | 2026-03-26 03:42:49.302604 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-03-26 03:42:49.302624 | orchestrator | Thursday 26 March 2026 03:42:48 +0000 (0:00:00.685) 0:00:24.049 ******** 2026-03-26 03:42:49.302654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:49.302784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:49.302828 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:49.302881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:49.302959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:49.302985 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:49.303006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:49.303026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:49.303055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:49.303076 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:49.303095 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:42:49.303115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:49.303147 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:42:49.303179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:54.131238 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:42:54.131384 | orchestrator | 2026-03-26 03:42:54.131400 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-03-26 03:42:54.131409 | orchestrator | Thursday 26 March 2026 03:42:49 +0000 (0:00:01.070) 0:00:25.120 ******** 2026-03-26 03:42:54.131418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:54.131428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:54.131438 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:54.131465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:54.131478 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:54.131507 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:54.131514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:54.131521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-03-26 03:42:54.131528 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:54.131547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:54.131555 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:42:54.131562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:54.131568 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:42:54.131579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:54.131586 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:42:54.131592 | orchestrator | 2026-03-26 03:42:54.131599 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-03-26 03:42:54.131613 | orchestrator | Thursday 26 March 2026 03:42:50 +0000 (0:00:00.871) 0:00:25.991 ******** 2026-03-26 03:42:54.131619 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:42:54.131626 | orchestrator | 2026-03-26 03:42:54.131633 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-03-26 03:42:54.131640 | orchestrator | Thursday 26 March 2026 03:42:50 +0000 (0:00:00.712) 0:00:26.703 ******** 2026-03-26 03:42:54.131646 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:42:54.131653 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:42:54.131660 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:42:54.131666 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:42:54.131672 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:42:54.131678 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:42:54.131684 | orchestrator | 2026-03-26 03:42:54.131690 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-03-26 03:42:54.131697 | orchestrator | Thursday 26 March 2026 03:42:51 +0000 (0:00:00.846) 
0:00:27.550 ******** 2026-03-26 03:42:54.131703 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:42:54.131709 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:42:54.131715 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:42:54.131721 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:42:54.131727 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:42:54.131733 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:42:54.131740 | orchestrator | 2026-03-26 03:42:54.131746 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-03-26 03:42:54.131752 | orchestrator | Thursday 26 March 2026 03:42:52 +0000 (0:00:00.943) 0:00:28.494 ******** 2026-03-26 03:42:54.131799 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:54.131807 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:54.131814 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:54.131821 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:42:54.131828 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:42:54.131834 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:42:54.131841 | orchestrator | 2026-03-26 03:42:54.131848 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-03-26 03:42:54.131855 | orchestrator | Thursday 26 March 2026 03:42:53 +0000 (0:00:00.835) 0:00:29.329 ******** 2026-03-26 03:42:54.131863 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:54.131870 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:54.131877 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:54.131884 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:42:54.131891 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:42:54.131898 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:42:54.131905 | orchestrator | 2026-03-26 03:42:59.205727 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-03-26 03:42:59.205869 | orchestrator | Thursday 26 March 2026 03:42:54 +0000 (0:00:00.625) 0:00:29.954 ******** 2026-03-26 03:42:59.205886 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:42:59.205900 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-26 03:42:59.205911 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-26 03:42:59.205923 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-26 03:42:59.205934 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-26 03:42:59.205945 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-26 03:42:59.205956 | orchestrator | 2026-03-26 03:42:59.205967 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-03-26 03:42:59.205979 | orchestrator | Thursday 26 March 2026 03:42:55 +0000 (0:00:01.632) 0:00:31.586 ******** 2026-03-26 03:42:59.205993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:59.206092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:59.206108 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:42:59.206134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:59.206175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:59.206188 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:59.206200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:42:59.206233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:42:59.206247 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:59.206261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:59.206283 | orchestrator | skipping: [testbed-node-3] 
2026-03-26 03:42:59.206297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:59.206309 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:42:59.206329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:42:59.206342 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:42:59.206354 | orchestrator | 2026-03-26 03:42:59.206368 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-03-26 03:42:59.206380 | orchestrator | Thursday 26 March 2026 03:42:56 +0000 (0:00:00.814) 0:00:32.400 ******** 2026-03-26 03:42:59.206394 | orchestrator | 
skipping: [testbed-node-0] 2026-03-26 03:42:59.206406 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:42:59.206418 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:42:59.206430 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:42:59.206444 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:42:59.206456 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:42:59.206469 | orchestrator | 2026-03-26 03:42:59.206481 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-03-26 03:42:59.206494 | orchestrator | Thursday 26 March 2026 03:42:57 +0000 (0:00:00.823) 0:00:33.224 ******** 2026-03-26 03:42:59.206507 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:42:59.206520 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-26 03:42:59.206532 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-26 03:42:59.206545 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-26 03:42:59.206557 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-26 03:42:59.206569 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-26 03:42:59.206581 | orchestrator | 2026-03-26 03:42:59.206594 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-03-26 03:42:59.206607 | orchestrator | Thursday 26 March 2026 03:42:58 +0000 (0:00:01.344) 0:00:34.569 ******** 2026-03-26 03:42:59.206628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.287632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:05.287800 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:05.287823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.287856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:05.287868 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:05.287880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.287892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:05.287904 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:05.287916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.287952 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:05.287981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.287993 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:05.288005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.288016 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:05.288028 | orchestrator | 2026-03-26 03:43:05.288047 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-03-26 03:43:05.288067 | orchestrator | Thursday 26 March 2026 03:42:59 +0000 (0:00:01.094) 0:00:35.664 ******** 2026-03-26 03:43:05.288086 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:05.288104 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:05.288122 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:05.288139 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:05.288157 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:05.288185 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:05.288204 | orchestrator | 2026-03-26 03:43:05.288224 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-03-26 03:43:05.288243 | orchestrator | Thursday 26 March 2026 03:43:00 +0000 (0:00:00.877) 0:00:36.542 ******** 2026-03-26 03:43:05.288264 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:05.288285 | orchestrator | 2026-03-26 03:43:05.288305 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-03-26 03:43:05.288325 | orchestrator | Thursday 26 March 2026 03:43:00 +0000 (0:00:00.148) 0:00:36.690 ******** 2026-03-26 03:43:05.288344 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:05.288365 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:05.288384 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:05.288401 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:05.288414 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:05.288427 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:05.288439 | 
orchestrator | 2026-03-26 03:43:05.288452 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-26 03:43:05.288465 | orchestrator | Thursday 26 March 2026 03:43:01 +0000 (0:00:00.651) 0:00:37.341 ******** 2026-03-26 03:43:05.288492 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:43:05.288506 | orchestrator | 2026-03-26 03:43:05.288519 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-03-26 03:43:05.288531 | orchestrator | Thursday 26 March 2026 03:43:02 +0000 (0:00:01.357) 0:00:38.699 ******** 2026-03-26 03:43:05.288543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:05.288567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:05.840231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:05.840335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:05.840373 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:05.840387 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:05.840422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:05.840436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:05.840464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:05.840477 | orchestrator | 2026-03-26 03:43:05.840503 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-03-26 03:43:05.840516 | orchestrator | Thursday 26 March 2026 03:43:05 +0000 (0:00:02.407) 0:00:41.106 ******** 2026-03-26 03:43:05.840529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.840547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:05.840568 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:05.840581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.840593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:05.840605 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:05.840616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:05.840636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:07.808233 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:07.808345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:07.808364 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:07.808395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:07.808445 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:07.808459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-03-26 03:43:07.808471 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:07.808481 | orchestrator | 2026-03-26 03:43:07.808493 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-03-26 03:43:07.808506 | orchestrator | Thursday 26 March 2026 03:43:06 +0000 (0:00:00.923) 0:00:42.029 ******** 2026-03-26 03:43:07.808519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:07.808533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:07.808565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:07.808578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:07.808605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:07.808617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:07.808629 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:07.808641 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:07.808652 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:07.808665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:07.808677 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:07.808689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:07.808702 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:07.808723 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:15.833659 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:15.833841 | orchestrator | 2026-03-26 03:43:15.833871 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-03-26 03:43:15.833878 | orchestrator | Thursday 26 March 2026 03:43:07 +0000 (0:00:01.596) 0:00:43.626 ******** 2026-03-26 03:43:15.833896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833913 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833919 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:15.833956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:15.833960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:15.833964 | orchestrator | 2026-03-26 03:43:15.833968 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-03-26 03:43:15.833972 | orchestrator | Thursday 26 March 2026 03:43:10 +0000 (0:00:02.796) 0:00:46.422 
******** 2026-03-26 03:43:15.833976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:15.833988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:25.683147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:25.683266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:25.683282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:25.683293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:25.683306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:25.683316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:25.683344 | orchestrator | 2026-03-26 03:43:25.683357 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-03-26 03:43:25.683369 | orchestrator | Thursday 26 March 2026 03:43:15 +0000 (0:00:05.232) 0:00:51.654 ******** 2026-03-26 03:43:25.683393 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:43:25.683406 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-26 03:43:25.683415 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-26 03:43:25.683425 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-26 03:43:25.683435 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-26 03:43:25.683444 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-26 03:43:25.683454 | orchestrator | 2026-03-26 03:43:25.683464 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-03-26 03:43:25.683474 | orchestrator | Thursday 26 March 2026 03:43:17 +0000 (0:00:01.732) 0:00:53.386 ******** 2026-03-26 03:43:25.683483 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:25.683493 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:25.683503 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:25.683512 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:25.683528 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:25.683538 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:25.683548 | orchestrator | 2026-03-26 03:43:25.683558 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-03-26 
03:43:25.683568 | orchestrator | Thursday 26 March 2026 03:43:18 +0000 (0:00:00.613) 0:00:54.000 ******** 2026-03-26 03:43:25.683578 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:25.683587 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:25.683597 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:25.683606 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:43:25.683616 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:43:25.683625 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:43:25.683635 | orchestrator | 2026-03-26 03:43:25.683644 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-03-26 03:43:25.683656 | orchestrator | Thursday 26 March 2026 03:43:19 +0000 (0:00:01.664) 0:00:55.665 ******** 2026-03-26 03:43:25.683666 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:25.683678 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:25.683689 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:25.683700 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:43:25.683710 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:43:25.683722 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:43:25.683732 | orchestrator | 2026-03-26 03:43:25.683743 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-03-26 03:43:25.683796 | orchestrator | Thursday 26 March 2026 03:43:21 +0000 (0:00:01.535) 0:00:57.201 ******** 2026-03-26 03:43:25.683809 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:43:25.683820 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-26 03:43:25.683831 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-26 03:43:25.683842 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-26 03:43:25.683853 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-26 03:43:25.683864 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-03-26 03:43:25.683875 | orchestrator | 2026-03-26 03:43:25.683886 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-03-26 03:43:25.683905 | orchestrator | Thursday 26 March 2026 03:43:22 +0000 (0:00:01.620) 0:00:58.821 ******** 2026-03-26 03:43:25.683918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:25.683930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:25.683942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:25.683966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:26.553453 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:26.553547 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:26.553582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:26.553595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:26.553604 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:26.553615 | orchestrator | 2026-03-26 03:43:26.553626 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-03-26 03:43:26.553637 | orchestrator | Thursday 26 March 2026 03:43:25 +0000 (0:00:02.674) 0:01:01.495 ******** 2026-03-26 03:43:26.553648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:26.553686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:26.553698 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:26.553708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:26.553725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:26.553734 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:26.553744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:26.553806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:26.553818 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:26.553828 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:26.553837 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:26.553858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.912916 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:29.913065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.913098 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:29.913119 | orchestrator | 2026-03-26 03:43:29.913139 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-03-26 03:43:29.913159 | orchestrator | Thursday 26 March 2026 03:43:26 +0000 (0:00:00.881) 0:01:02.377 ******** 2026-03-26 03:43:29.913178 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:29.913194 | orchestrator | skipping: 
[testbed-node-1] 2026-03-26 03:43:29.913211 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:29.913230 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:29.913249 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:29.913268 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:29.913286 | orchestrator | 2026-03-26 03:43:29.913306 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-03-26 03:43:29.913324 | orchestrator | Thursday 26 March 2026 03:43:27 +0000 (0:00:00.836) 0:01:03.214 ******** 2026-03-26 03:43:29.913346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.913370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:29.913390 | orchestrator | skipping: [testbed-node-0] 2026-03-26 
03:43:29.913403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.913432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:29.913470 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:29.913504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.913517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 03:43:29.913529 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:29.913540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.913552 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:29.913570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.913588 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:29.913608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-03-26 03:43:29.913637 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:29.913657 | orchestrator | 2026-03-26 03:43:29.913685 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-03-26 03:43:29.913700 | orchestrator | Thursday 26 March 2026 03:43:28 +0000 (0:00:00.862) 0:01:04.076 ******** 2026-03-26 03:43:29.913722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:59.791093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:59.791222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:59.791245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:59.791261 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:59.791276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:59.791317 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-03-26 03:43:59.791354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:59.791371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-03-26 03:43:59.791386 | orchestrator | 
2026-03-26 03:43:59.791402 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-03-26 03:43:59.791418 | orchestrator | Thursday 26 March 2026 03:43:29 +0000 (0:00:01.655) 0:01:05.731 ******** 2026-03-26 03:43:59.791432 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:43:59.791449 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:43:59.791458 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:43:59.791466 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:43:59.791474 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:43:59.791482 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:43:59.791489 | orchestrator | 2026-03-26 03:43:59.791498 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-03-26 03:43:59.791506 | orchestrator | Thursday 26 March 2026 03:43:30 +0000 (0:00:00.653) 0:01:06.385 ******** 2026-03-26 03:43:59.791514 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:43:59.791522 | orchestrator | 2026-03-26 03:43:59.791530 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-26 03:43:59.791538 | orchestrator | Thursday 26 March 2026 03:43:35 +0000 (0:00:04.792) 0:01:11.178 ******** 2026-03-26 03:43:59.791546 | orchestrator | 2026-03-26 03:43:59.791554 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-26 03:43:59.791562 | orchestrator | Thursday 26 March 2026 03:43:35 +0000 (0:00:00.073) 0:01:11.251 ******** 2026-03-26 03:43:59.791570 | orchestrator | 2026-03-26 03:43:59.791586 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-26 03:43:59.791594 | orchestrator | Thursday 26 March 2026 03:43:35 +0000 (0:00:00.109) 0:01:11.361 ******** 2026-03-26 03:43:59.791602 | orchestrator | 2026-03-26 03:43:59.791610 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-03-26 03:43:59.791618 | orchestrator | Thursday 26 March 2026 03:43:35 +0000 (0:00:00.254) 0:01:11.615 ******** 2026-03-26 03:43:59.791627 | orchestrator | 2026-03-26 03:43:59.791637 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-26 03:43:59.791650 | orchestrator | Thursday 26 March 2026 03:43:35 +0000 (0:00:00.071) 0:01:11.686 ******** 2026-03-26 03:43:59.791668 | orchestrator | 2026-03-26 03:43:59.791685 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-03-26 03:43:59.791697 | orchestrator | Thursday 26 March 2026 03:43:35 +0000 (0:00:00.070) 0:01:11.757 ******** 2026-03-26 03:43:59.791710 | orchestrator | 2026-03-26 03:43:59.791722 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-03-26 03:43:59.791734 | orchestrator | Thursday 26 March 2026 03:43:36 +0000 (0:00:00.074) 0:01:11.831 ******** 2026-03-26 03:43:59.791746 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:43:59.791851 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:43:59.791860 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:43:59.791868 | orchestrator | 2026-03-26 03:43:59.791876 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-03-26 03:43:59.791884 | orchestrator | Thursday 26 March 2026 03:43:43 +0000 (0:00:07.594) 0:01:19.425 ******** 2026-03-26 03:43:59.791892 | orchestrator | changed: [testbed-node-0] 2026-03-26 03:43:59.791900 | orchestrator | changed: [testbed-node-1] 2026-03-26 03:43:59.791908 | orchestrator | changed: [testbed-node-2] 2026-03-26 03:43:59.791916 | orchestrator | 2026-03-26 03:43:59.791924 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-03-26 03:43:59.791932 | orchestrator | Thursday 26 March 2026 03:43:48 +0000 
(0:00:04.951) 0:01:24.377 ******** 2026-03-26 03:43:59.791940 | orchestrator | changed: [testbed-node-3] 2026-03-26 03:43:59.791948 | orchestrator | changed: [testbed-node-5] 2026-03-26 03:43:59.791956 | orchestrator | changed: [testbed-node-4] 2026-03-26 03:43:59.791965 | orchestrator | 2026-03-26 03:43:59.791978 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 03:43:59.791988 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-26 03:43:59.791997 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-26 03:43:59.792016 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-26 03:44:00.320202 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-26 03:44:00.320303 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-26 03:44:00.320316 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-03-26 03:44:00.320344 | orchestrator | 2026-03-26 03:44:00.320364 | orchestrator | 2026-03-26 03:44:00.320374 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 03:44:00.320385 | orchestrator | Thursday 26 March 2026 03:43:59 +0000 (0:00:11.229) 0:01:35.606 ******** 2026-03-26 03:44:00.320394 | orchestrator | =============================================================================== 2026-03-26 03:44:00.320403 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.23s 2026-03-26 03:44:00.320437 | orchestrator | ceilometer : Restart ceilometer-notification container ------------------ 7.59s 2026-03-26 03:44:00.320446 | orchestrator | ceilometer : Copying over 
ceilometer.conf ------------------------------- 5.23s 2026-03-26 03:44:00.320455 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 4.95s 2026-03-26 03:44:00.320464 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.79s 2026-03-26 03:44:00.320473 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.31s 2026-03-26 03:44:00.320482 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 4.00s 2026-03-26 03:44:00.320490 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.89s 2026-03-26 03:44:00.320499 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.23s 2026-03-26 03:44:00.320508 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.80s 2026-03-26 03:44:00.320517 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.67s 2026-03-26 03:44:00.320526 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.41s 2026-03-26 03:44:00.320535 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.73s 2026-03-26 03:44:00.320543 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.68s 2026-03-26 03:44:00.320552 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.66s 2026-03-26 03:44:00.320562 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.66s 2026-03-26 03:44:00.320571 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.63s 2026-03-26 03:44:00.320580 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.62s 2026-03-26 03:44:00.320588 | orchestrator | service-cert-copy : ceilometer | 
Copying over backend internal TLS key --- 1.60s 2026-03-26 03:44:00.320597 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.54s 2026-03-26 03:44:02.835550 | orchestrator | 2026-03-26 03:44:02 | INFO  | Task dff533f4-b9a6-47d4-badc-74d13e9ba245 (aodh) was prepared for execution. 2026-03-26 03:44:02.835654 | orchestrator | 2026-03-26 03:44:02 | INFO  | It takes a moment until task dff533f4-b9a6-47d4-badc-74d13e9ba245 (aodh) has been started and output is visible here. 2026-03-26 03:44:36.053981 | orchestrator | 2026-03-26 03:44:36.054115 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:44:36.054126 | orchestrator | 2026-03-26 03:44:36.054133 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:44:36.054138 | orchestrator | Thursday 26 March 2026 03:44:07 +0000 (0:00:00.286) 0:00:00.286 ******** 2026-03-26 03:44:36.054144 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:44:36.054150 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:44:36.054156 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:44:36.054161 | orchestrator | 2026-03-26 03:44:36.054166 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:44:36.054172 | orchestrator | Thursday 26 March 2026 03:44:07 +0000 (0:00:00.309) 0:00:00.596 ******** 2026-03-26 03:44:36.054177 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-03-26 03:44:36.054183 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-03-26 03:44:36.054189 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-03-26 03:44:36.054194 | orchestrator | 2026-03-26 03:44:36.054199 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-03-26 03:44:36.054204 | orchestrator | 2026-03-26 03:44:36.054210 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-03-26 03:44:36.054215 | orchestrator | Thursday 26 March 2026 03:44:08 +0000 (0:00:00.450) 0:00:01.046 ******** 2026-03-26 03:44:36.054221 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:44:36.054226 | orchestrator | 2026-03-26 03:44:36.054252 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-03-26 03:44:36.054257 | orchestrator | Thursday 26 March 2026 03:44:08 +0000 (0:00:00.622) 0:00:01.669 ******** 2026-03-26 03:44:36.054263 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-03-26 03:44:36.054268 | orchestrator | 2026-03-26 03:44:36.054273 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-03-26 03:44:36.054279 | orchestrator | Thursday 26 March 2026 03:44:12 +0000 (0:00:03.760) 0:00:05.430 ******** 2026-03-26 03:44:36.054284 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-03-26 03:44:36.054290 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-03-26 03:44:36.054295 | orchestrator | 2026-03-26 03:44:36.054300 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-03-26 03:44:36.054305 | orchestrator | Thursday 26 March 2026 03:44:19 +0000 (0:00:06.645) 0:00:12.076 ******** 2026-03-26 03:44:36.054311 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-26 03:44:36.054317 | orchestrator | 2026-03-26 03:44:36.054322 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-03-26 03:44:36.054327 | orchestrator | Thursday 26 March 2026 03:44:22 +0000 (0:00:03.452) 0:00:15.529 ******** 2026-03-26 03:44:36.054333 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-03-26 03:44:36.054338 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-03-26 03:44:36.054343 | orchestrator | 2026-03-26 03:44:36.054348 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-03-26 03:44:36.054353 | orchestrator | Thursday 26 March 2026 03:44:26 +0000 (0:00:03.925) 0:00:19.454 ******** 2026-03-26 03:44:36.054359 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-26 03:44:36.054364 | orchestrator | 2026-03-26 03:44:36.054369 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-03-26 03:44:36.054374 | orchestrator | Thursday 26 March 2026 03:44:29 +0000 (0:00:03.429) 0:00:22.884 ******** 2026-03-26 03:44:36.054379 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-03-26 03:44:36.054384 | orchestrator | 2026-03-26 03:44:36.054389 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-03-26 03:44:36.054394 | orchestrator | Thursday 26 March 2026 03:44:33 +0000 (0:00:04.000) 0:00:26.885 ******** 2026-03-26 03:44:36.054402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 03:44:36.054424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 03:44:36.054435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 03:44:36.054442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-26 03:44:36.054448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-26 03:44:36.054456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-26 03:44:36.054465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:36.054480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:37.423457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:37.423526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:37.423534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:37.423538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:37.423544 | orchestrator | 2026-03-26 03:44:37.423549 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-03-26 03:44:37.423555 | orchestrator | Thursday 26 March 2026 03:44:36 +0000 (0:00:02.130) 0:00:29.016 ******** 2026-03-26 03:44:37.423559 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:44:37.423565 | orchestrator | 2026-03-26 
03:44:37.423569 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-03-26 03:44:37.423573 | orchestrator | Thursday 26 March 2026 03:44:36 +0000 (0:00:00.132) 0:00:29.148 ******** 2026-03-26 03:44:37.423578 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:44:37.423582 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:44:37.423586 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:44:37.423590 | orchestrator | 2026-03-26 03:44:37.423594 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-03-26 03:44:37.423598 | orchestrator | Thursday 26 March 2026 03:44:36 +0000 (0:00:00.564) 0:00:29.712 ******** 2026-03-26 03:44:37.423604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 03:44:37.423638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 03:44:37.423644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:44:37.423648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 03:44:37.423653 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:44:37.423657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 03:44:37.423662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 03:44:37.423666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:44:37.423679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 03:44:42.687442 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:44:42.687556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 03:44:42.687576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-03-26 03:44:42.687590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:44:42.687602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 03:44:42.687614 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:44:42.687626 | orchestrator | 2026-03-26 03:44:42.687638 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-03-26 03:44:42.687652 | orchestrator | Thursday 26 March 2026 03:44:37 +0000 (0:00:00.677) 0:00:30.390 ******** 2026-03-26 03:44:42.687743 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:44:42.687822 | orchestrator | 2026-03-26 03:44:42.687835 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-03-26 03:44:42.687846 | orchestrator | Thursday 
26 March 2026 03:44:38 +0000 (0:00:00.801) 0:00:31.192 ******** 2026-03-26 03:44:42.687858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 03:44:42.687901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 03:44:42.687930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-26 03:44:42.687949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-26 03:44:42.687969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-03-26 03:44:42.688033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-03-26 03:44:42.688057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:42.688116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:43.375819 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:43.375915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:43.375931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:43.375943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-03-26 03:44:43.376011 | orchestrator | 2026-03-26 03:44:43.376026 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-03-26 03:44:43.376038 | orchestrator | Thursday 26 March 2026 03:44:42 +0000 (0:00:04.462) 0:00:35.654 ******** 2026-03-26 03:44:43.376052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 03:44:43.376065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 03:44:43.376093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:44:43.376105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 03:44:43.376117 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:44:43.376154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 03:44:43.376190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 03:44:43.376202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:44:43.376214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 03:44:43.376225 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:44:43.376245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 03:44:44.480609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-03-26 03:44:44.480702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:44:44.480825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 03:44:44.480837 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:44:44.480845 | orchestrator | 2026-03-26 03:44:44.480853 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-03-26 03:44:44.480862 | orchestrator | Thursday 26 March 2026 03:44:43 +0000 (0:00:00.684) 0:00:36.338 ******** 2026-03-26 03:44:44.480869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-26 03:44:44.480877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 03:44:44.480883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 03:44:44.480907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:44:44.480913 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:44:44.480944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:44.480951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:44.480958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:44.480964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:44:44.480971 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:44:44.480984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:48.682372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:48.682558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:48.682588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:44:48.682602 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:44:48.682616 | orchestrator |
2026-03-26 03:44:48.682628 | orchestrator | TASK [aodh : Copying over config.json files for services] **********************
2026-03-26 03:44:48.682642 | orchestrator | Thursday 26 March 2026 03:44:44 +0000 (0:00:01.107) 0:00:37.445 ********
2026-03-26 03:44:48.682664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:48.682694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:48.682744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:48.682914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:48.682932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:48.682946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:48.682959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:48.682973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:48.682986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:48.683032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:44:57.382707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:44:57.382869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:44:57.382909 | orchestrator |
2026-03-26 03:44:57.382922 | orchestrator | TASK [aodh : Copying over aodh.conf] *******************************************
2026-03-26 03:44:57.382934 | orchestrator | Thursday 26 March 2026 03:44:48 +0000 (0:00:04.195) 0:00:41.641 ********
2026-03-26 03:44:57.382945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:57.382956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:57.382965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:44:57.383018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:57.383026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:57.383032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:44:57.383038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:57.383044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:57.383050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:44:57.383061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:44:57.383072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:45:02.556579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:45:02.556707 | orchestrator |
2026-03-26 03:45:02.556730 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************
2026-03-26 03:45:02.556747 | orchestrator | Thursday 26 March 2026 03:44:57 +0000 (0:00:08.707) 0:00:50.348 ********
2026-03-26 03:45:02.556844 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:45:02.556855 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:45:02.556869 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:45:02.556882 | orchestrator |
2026-03-26 03:45:02.556897 | orchestrator | TASK [aodh : Check aodh containers] ********************************************
2026-03-26 03:45:02.556909 | orchestrator | Thursday 26 March 2026 03:44:59 +0000 (0:00:01.790) 0:00:52.138 ********
2026-03-26 03:45:02.556923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:45:02.556938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:45:02.556980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-26 03:45:02.557016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:45:02.557030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:45:02.557043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-26 03:45:02.557056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:45:02.557070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:45:02.557094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-26 03:45:02.557107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:45:02.557129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:46:01.814461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-26 03:46:01.814555 | orchestrator |
2026-03-26 03:46:01.814564 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-03-26 03:46:01.814572 | orchestrator | Thursday 26 March 2026 03:45:02 +0000 (0:00:03.375) 0:00:55.514 ********
2026-03-26 03:46:01.814579 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:46:01.814586 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:46:01.814593 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:46:01.814599 | orchestrator |
2026-03-26 03:46:01.814605 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-03-26 03:46:01.814612 | orchestrator | Thursday 26 March 2026 03:45:02 +0000 (0:00:00.335) 0:00:55.849 ********
2026-03-26 03:46:01.814619 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:46:01.814625 | orchestrator |
2026-03-26 03:46:01.814631 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-03-26 03:46:01.814637 | orchestrator | Thursday 26 March 2026 03:45:05 +0000 (0:00:02.153) 0:00:58.003 ********
2026-03-26 03:46:01.814643 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:46:01.814669 | orchestrator |
2026-03-26 03:46:01.814676 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-03-26 03:46:01.814681 | orchestrator | Thursday 26 March 2026 03:45:07 +0000 (0:00:02.269) 0:01:00.273 ********
2026-03-26 03:46:01.814687 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:46:01.814692 | orchestrator |
2026-03-26 03:46:01.814698 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-26 03:46:01.814703 | orchestrator | Thursday 26 March 2026 03:45:20 +0000 (0:00:13.524) 0:01:13.797 ********
2026-03-26 03:46:01.814710 | orchestrator |
2026-03-26 03:46:01.814716 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-26 03:46:01.814721 | orchestrator | Thursday 26 March 2026 03:45:20 +0000 (0:00:00.087) 0:01:13.884 ********
2026-03-26 03:46:01.814727 | orchestrator |
2026-03-26 03:46:01.814733 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-03-26 03:46:01.814739 | orchestrator | Thursday 26 March 2026 03:45:20 +0000 (0:00:00.072) 0:01:13.957 ********
2026-03-26 03:46:01.814745 | orchestrator |
2026-03-26 03:46:01.814805 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-03-26 03:46:01.814812 | orchestrator | Thursday 26 March 2026 03:45:21 +0000 (0:00:00.289) 0:01:14.246 ********
2026-03-26 03:46:01.814819 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:46:01.814825 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:46:01.814831 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:46:01.814837 | orchestrator |
2026-03-26 03:46:01.814843 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-03-26 03:46:01.814848 | orchestrator | Thursday 26 March 2026 03:45:31 +0000 (0:00:10.640) 0:01:24.887 ********
2026-03-26 03:46:01.814854 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:46:01.814860 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:46:01.814866 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:46:01.814872 | orchestrator |
2026-03-26 03:46:01.814878 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-03-26 03:46:01.814884 | orchestrator | Thursday 26 March 2026 03:45:40 +0000 (0:00:08.230) 0:01:33.118 ********
2026-03-26 03:46:01.814890 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:46:01.814896 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:46:01.814903 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:46:01.814908 | orchestrator |
2026-03-26 03:46:01.814915 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-03-26 03:46:01.814921 | orchestrator | Thursday 26 March 2026 03:45:50 +0000 (0:00:10.712) 0:01:43.831 ********
2026-03-26 03:46:01.814927 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:46:01.814933 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:46:01.814939 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:46:01.814944 | orchestrator |
2026-03-26 03:46:01.814950 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:46:01.814957 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 03:46:01.814966 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 03:46:01.814972 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 03:46:01.814978 | orchestrator |
2026-03-26 03:46:01.814984 | orchestrator |
2026-03-26 03:46:01.814990 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:46:01.814996 | orchestrator | Thursday 26 March 2026 03:46:01 +0000 (0:00:10.559) 0:01:54.390 ********
2026-03-26 03:46:01.815002 | orchestrator | ===============================================================================
2026-03-26 03:46:01.815008 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.52s
2026-03-26 03:46:01.815014 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.71s
2026-03-26 03:46:01.815038 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.64s
2026-03-26 03:46:01.815044 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.56s
2026-03-26 03:46:01.815051 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.71s
2026-03-26 03:46:01.815057 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 8.23s
2026-03-26 03:46:01.815063 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.65s
2026-03-26 03:46:01.815069 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.46s
2026-03-26 03:46:01.815075 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.20s
2026-03-26 03:46:01.815081 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 4.00s
2026-03-26 03:46:01.815087 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.93s
2026-03-26 03:46:01.815093 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.76s
2026-03-26 03:46:01.815099 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.45s
2026-03-26 03:46:01.815106 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.43s
2026-03-26 03:46:01.815112 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.38s
2026-03-26 03:46:01.815117 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.27s
2026-03-26 03:46:01.815124 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.15s
2026-03-26 03:46:01.815130 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.13s
2026-03-26 03:46:01.815136 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.79s
2026-03-26 03:46:01.815142 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.11s
2026-03-26 03:46:04.261238 | orchestrator | 2026-03-26 03:46:04 | INFO  | Task ac547b8e-a406-49a2-abee-9b9679420316 (kolla-ceph-rgw) was prepared for execution.
2026-03-26 03:46:04.261395 | orchestrator | 2026-03-26 03:46:04 | INFO  | It takes a moment until task ac547b8e-a406-49a2-abee-9b9679420316 (kolla-ceph-rgw) has been started and output is visible here.
2026-03-26 03:46:42.075454 | orchestrator |
2026-03-26 03:46:42.075596 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 03:46:42.075616 | orchestrator |
2026-03-26 03:46:42.075629 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 03:46:42.075641 | orchestrator | Thursday 26 March 2026 03:46:08 +0000 (0:00:00.316) 0:00:00.316 ********
2026-03-26 03:46:42.075653 | orchestrator | ok: [testbed-manager]
2026-03-26 03:46:42.075665 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:46:42.075676 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:46:42.075687 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:46:42.075698 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:46:42.075709 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:46:42.075719 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:46:42.075731 | orchestrator |
2026-03-26 03:46:42.075742 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 03:46:42.075843 | orchestrator | Thursday 26 March 2026 03:46:09 +0000 (0:00:00.955) 0:00:01.272 ********
2026-03-26 03:46:42.075861 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-26 03:46:42.075872 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-26 03:46:42.075884 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-26 03:46:42.075895 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-26 03:46:42.075907 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-26 03:46:42.075927 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-26 03:46:42.075945 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-26 03:46:42.075996 | orchestrator |
2026-03-26 03:46:42.076017 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-26 03:46:42.076037 | orchestrator |
2026-03-26 03:46:42.076053 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-26 03:46:42.076066 | orchestrator | Thursday 26 March 2026 03:46:10 +0000 (0:00:00.822) 0:00:02.094 ********
2026-03-26 03:46:42.076080 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 03:46:42.076094 | orchestrator |
2026-03-26 03:46:42.076107 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-26 03:46:42.076120 | orchestrator | Thursday 26 March 2026 03:46:12 +0000 (0:00:01.656) 0:00:03.751 ********
2026-03-26 03:46:42.076134 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-26 03:46:42.076147 | orchestrator |
2026-03-26 03:46:42.076160 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-26 03:46:42.076172 | orchestrator | Thursday 26 March 2026 03:46:16 +0000 (0:00:03.922) 0:00:07.673 ********
2026-03-26 03:46:42.076233 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-26 03:46:42.076259 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-26 03:46:42.076279 | orchestrator |
2026-03-26 03:46:42.076293 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-26 03:46:42.076305 | orchestrator | Thursday 26 March 2026 03:46:22 +0000 (0:00:06.506) 0:00:14.180 ********
2026-03-26 03:46:42.076323 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-26 03:46:42.076341 | orchestrator |
2026-03-26 03:46:42.076359 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-26 03:46:42.076377 | orchestrator | Thursday 26 March 2026 03:46:25 +0000 (0:00:03.325) 0:00:17.505 ********
2026-03-26 03:46:42.076394 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-26 03:46:42.076412 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-26 03:46:42.076429 | orchestrator |
2026-03-26 03:46:42.076446 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-26 03:46:42.076464 | orchestrator | Thursday 26 March 2026 03:46:29 +0000 (0:00:04.015) 0:00:21.521 ********
2026-03-26 03:46:42.076480 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-26 03:46:42.076496 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-26 03:46:42.076512 | orchestrator |
2026-03-26 03:46:42.076528 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-26 03:46:42.076545 | orchestrator | Thursday 26 March 2026 03:46:36 +0000 (0:00:06.487) 0:00:28.008 ********
2026-03-26 03:46:42.076561 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-26 03:46:42.076579 | orchestrator |
2026-03-26 03:46:42.076597 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:46:42.076615 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:42.076634 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:42.076655 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:42.076675 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:42.076696 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:42.076788 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:42.076814 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:42.076833 | orchestrator |
2026-03-26 03:46:42.076852 | orchestrator |
2026-03-26 03:46:42.076872 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:46:42.076891 | orchestrator | Thursday 26 March 2026 03:46:41 +0000 (0:00:05.109) 0:00:33.118 ********
2026-03-26 03:46:42.076911 | orchestrator | ===============================================================================
2026-03-26 03:46:42.076930 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.51s
2026-03-26 03:46:42.076948 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.49s
2026-03-26 03:46:42.076967 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.11s
2026-03-26 03:46:42.076985 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.02s
2026-03-26 03:46:42.077003 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.92s
2026-03-26 03:46:42.077021 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.33s
2026-03-26 03:46:42.077039 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.66s
2026-03-26 03:46:42.077056 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.96s
2026-03-26 03:46:42.077074 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2026-03-26 03:46:44.601067 | orchestrator | 2026-03-26 03:46:44 | INFO  | Task cab4e368-13de-4ef3-b76d-5a075b924f38 (gnocchi) was prepared for execution.
2026-03-26 03:46:44.601145 | orchestrator | 2026-03-26 03:46:44 | INFO  | It takes a moment until task cab4e368-13de-4ef3-b76d-5a075b924f38 (gnocchi) has been started and output is visible here.
2026-03-26 03:46:50.244030 | orchestrator |
2026-03-26 03:46:50.244145 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 03:46:50.244158 | orchestrator |
2026-03-26 03:46:50.244166 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 03:46:50.244173 | orchestrator | Thursday 26 March 2026 03:46:49 +0000 (0:00:00.288) 0:00:00.288 ********
2026-03-26 03:46:50.244180 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:46:50.244188 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:46:50.244194 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:46:50.244201 | orchestrator |
2026-03-26 03:46:50.244207 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 03:46:50.244214 | orchestrator | Thursday 26 March 2026 03:46:49 +0000 (0:00:00.336) 0:00:00.625 ********
2026-03-26 03:46:50.244268 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-03-26 03:46:50.244277 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-03-26 03:46:50.244285 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-03-26 03:46:50.244292 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-03-26 03:46:50.244298 | orchestrator |
2026-03-26 03:46:50.244305 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-03-26 03:46:50.244312 | orchestrator | skipping: no hosts matched
2026-03-26 03:46:50.244319 | orchestrator |
2026-03-26 03:46:50.244326 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:46:50.244333 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:50.244341 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:50.244347 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:46:50.244372 | orchestrator |
2026-03-26 03:46:50.244379 | orchestrator |
2026-03-26 03:46:50.244385 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:46:50.244392 | orchestrator | Thursday 26 March 2026 03:46:49 +0000 (0:00:00.415) 0:00:01.040 ********
2026-03-26 03:46:50.244398 | orchestrator | ===============================================================================
2026-03-26 03:46:50.244404 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2026-03-26 03:46:50.244411 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-03-26 03:46:52.721940 | orchestrator | 2026-03-26 03:46:52 | INFO  | Task 4dad45d8-ad3c-4a11-bb8f-ae77d3a40d56 (manila) was prepared for execution.
2026-03-26 03:46:52.722093 | orchestrator | 2026-03-26 03:46:52 | INFO  | It takes a moment until task 4dad45d8-ad3c-4a11-bb8f-ae77d3a40d56 (manila) has been started and output is visible here.
2026-03-26 03:47:35.865122 | orchestrator |
2026-03-26 03:47:35.865248 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 03:47:35.865273 | orchestrator |
2026-03-26 03:47:35.865291 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 03:47:35.865304 | orchestrator | Thursday 26 March 2026 03:46:57 +0000 (0:00:00.291) 0:00:00.291 ********
2026-03-26 03:47:35.865313 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:47:35.865323 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:47:35.865332 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:47:35.865341 | orchestrator |
2026-03-26 03:47:35.865350 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 03:47:35.865359 | orchestrator | Thursday 26 March 2026 03:46:57 +0000 (0:00:00.320) 0:00:00.611 ********
2026-03-26 03:47:35.865368 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-03-26 03:47:35.865377 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-03-26 03:47:35.865386 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-03-26 03:47:35.865395 | orchestrator |
2026-03-26 03:47:35.865403 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-03-26 03:47:35.865412 | orchestrator |
2026-03-26 03:47:35.865421 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-26 03:47:35.865430 | orchestrator | Thursday 26 March 2026 03:46:57 +0000 (0:00:00.466) 0:00:01.078 ********
2026-03-26 03:47:35.865438 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:47:35.865448 | orchestrator |
2026-03-26 03:47:35.865457 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-26 03:47:35.865466 | orchestrator | Thursday 26 March 2026 03:46:58 +0000 (0:00:00.575) 0:00:01.654 ********
2026-03-26 03:47:35.865475 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:47:35.865485 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:47:35.865493 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:47:35.865502 | orchestrator |
2026-03-26 03:47:35.865511 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-03-26 03:47:35.865520 | orchestrator | Thursday 26 March 2026 03:46:59 +0000 (0:00:00.494) 0:00:02.148 ********
2026-03-26 03:47:35.865529 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-03-26 03:47:35.865537 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-03-26 03:47:35.865546 | orchestrator |
2026-03-26 03:47:35.865555 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-03-26 03:47:35.865564 | orchestrator | Thursday 26 March 2026 03:47:05 +0000 (0:00:06.592) 0:00:08.741 ********
2026-03-26 03:47:35.865573 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-03-26 03:47:35.865582 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-03-26 03:47:35.865614 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-03-26 03:47:35.865623 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-03-26 03:47:35.865632 | orchestrator |
2026-03-26 03:47:35.865642 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-03-26 03:47:35.865658 | orchestrator | Thursday 26 March 2026 03:47:18 +0000 (0:00:13.260) 0:00:22.002 ********
2026-03-26 03:47:35.865673 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-26 03:47:35.865688 | orchestrator |
2026-03-26 03:47:35.865703 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-03-26 03:47:35.865718 | orchestrator | Thursday 26 March 2026 03:47:22 +0000 (0:00:03.416) 0:00:25.418 ********
2026-03-26 03:47:35.865734 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-26 03:47:35.865750 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-03-26 03:47:35.865792 | orchestrator |
2026-03-26 03:47:35.865809 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-03-26 03:47:35.865824 | orchestrator | Thursday 26 March 2026 03:47:26 +0000 (0:00:03.981) 0:00:29.400 ********
2026-03-26 03:47:35.865840 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-26 03:47:35.865855 | orchestrator |
2026-03-26 03:47:35.865871 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-03-26 03:47:35.865880 | orchestrator | Thursday 26 March 2026 03:47:29 +0000 (0:00:03.226) 0:00:32.626 ********
2026-03-26 03:47:35.865895 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-03-26 03:47:35.865911 | orchestrator |
2026-03-26 03:47:35.865925 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-03-26 03:47:35.865940 | orchestrator | Thursday 26 March 2026 03:47:33 +0000 (0:00:04.011) 0:00:36.638 ********
2026-03-26 03:47:35.865981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:47:35.866002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:47:35.866086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:47:35.866153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:47:35.866174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:47:35.866190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:47:35.866219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:47:47.295858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:47:47.295975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:47:47.296009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:47:47.296018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:47:47.296026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:47:47.296034 | orchestrator |
2026-03-26 03:47:47.296044 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-26 03:47:47.296053 | orchestrator | Thursday 26 March 2026 03:47:35 +0000 (0:00:02.411) 0:00:39.049 ********
2026-03-26 03:47:47.296061 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:47:47.296069 | orchestrator |
2026-03-26 03:47:47.296076 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-03-26 03:47:47.296084 | orchestrator | Thursday 26 March 2026 03:47:36 +0000 (0:00:00.633) 0:00:39.683 ********
2026-03-26 03:47:47.296091 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:47:47.296100 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:47:47.296107 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:47:47.296114 | orchestrator |
2026-03-26 03:47:47.296122 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-03-26 03:47:47.296129 | orchestrator | Thursday 26 March 2026 03:47:37 +0000 (0:00:01.053) 0:00:40.736 ********
2026-03-26 03:47:47.296137 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-26 03:47:47.296158 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-26 03:47:47.296167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-26 03:47:47.296181 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-26 03:47:47.296188 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-26 03:47:47.296196 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-26 03:47:47.296203 | orchestrator |
2026-03-26 03:47:47.296211 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-03-26 03:47:47.296218 | orchestrator | Thursday 26 March 2026 03:47:39 +0000 (0:00:01.877) 0:00:42.613 ********
2026-03-26 03:47:47.296226 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-26 03:47:47.296233 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-26 03:47:47.296240 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-26 03:47:47.296247 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-26 03:47:47.296255 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-03-26 03:47:47.296262 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-03-26 03:47:47.296269 | orchestrator |
2026-03-26 03:47:47.296288 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-03-26 03:47:47.296296 | orchestrator | Thursday 26 March 2026 03:47:40 +0000 (0:00:00.846) 0:00:43.909 ********
2026-03-26 03:47:47.296304 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-03-26 03:47:47.296312 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-03-26 03:47:47.296319 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-03-26 03:47:47.296326 | orchestrator |
2026-03-26 03:47:47.296334 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-03-26 03:47:47.296343 | orchestrator | Thursday 26 March 2026 03:47:41 +0000 (0:00:00.155) 0:00:44.755 ********
2026-03-26 03:47:47.296351 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:47:47.296359 | orchestrator |
2026-03-26 03:47:47.296367 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-03-26 03:47:47.296375 | orchestrator | Thursday 26 March 2026 03:47:41 +0000 (0:00:00.684) 0:00:44.910 ********
2026-03-26 03:47:47.296384 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:47:47.296392 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:47:47.296400 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:47:47.296408 | orchestrator |
2026-03-26 03:47:47.296416 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-03-26 03:47:47.296425 | orchestrator | Thursday 26 March 2026 03:47:42 +0000 (0:00:00.684) 0:00:45.595 ********
2026-03-26 03:47:47.296433 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:47:47.296441 | orchestrator |
2026-03-26 03:47:47.296449 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-03-26 03:47:47.296457 | orchestrator | Thursday 26 March 2026 03:47:43 +0000 (0:00:00.715) 0:00:46.310 ********
2026-03-26 03:47:47.296477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:47:48.203986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-26 03:47:48.204108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-26 03:47:48.204125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204137 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204249 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-03-26 03:47:48.204307 | orchestrator | 2026-03-26 03:47:48.204326 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-03-26 03:47:48.204344 | orchestrator | Thursday 26 March 2026 03:47:47 +0000 (0:00:04.172) 0:00:50.483 ******** 2026-03-26 03:47:48.204374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 03:47:48.890313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890451 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:47:48.890465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 03:47:48.890509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890565 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:47:48.890576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 03:47:48.890588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 03:47:48.890669 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:47:48.890688 | orchestrator | 2026-03-26 03:47:48.890708 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-03-26 03:47:48.890728 | orchestrator | Thursday 26 March 2026 03:47:48 +0000 (0:00:00.901) 0:00:51.384 ******** 2026-03-26 03:47:48.890789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 03:47:53.557053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557169 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:47:53.557179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 03:47:53.557186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557221 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:47:53.557227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-26 03:47:53.557242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 03:47:53.557263 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:47:53.557269 | orchestrator | 2026-03-26 03:47:53.557277 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-03-26 03:47:53.557286 | orchestrator | Thursday 26 
March 2026 03:47:49 +0000 (0:00:00.911) 0:00:52.296 ********
2026-03-26 03:47:53.557310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:00.875375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:00.875471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:00.875481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:00.875564 | orchestrator |
2026-03-26 03:48:00.875571 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-03-26 03:48:00.875578 | orchestrator | Thursday 26 March 2026 03:47:53 +0000 (0:00:04.662) 0:00:56.958 ********
2026-03-26 03:48:00.875590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:05.552857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:05.552998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:05.553015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:05.553146 | orchestrator |
2026-03-26 03:48:05.553157 | orchestrator | TASK [manila : Copying over manila-share.conf] *********************************
2026-03-26 03:48:05.553168 | orchestrator | Thursday 26 March 2026 03:48:00 +0000 (0:00:07.104) 0:01:04.063 ********
2026-03-26 03:48:05.553179 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-03-26 03:48:05.553221 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-03-26 03:48:05.553239 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-03-26 03:48:05.553255 | orchestrator |
2026-03-26 03:48:05.553270 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-03-26 03:48:05.553294 | orchestrator | Thursday 26 March 2026 03:48:04 +0000 (0:00:03.914) 0:01:07.977 ********
2026-03-26 03:48:05.553320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:09.123211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.239957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240083 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:48:09.240099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:09.240132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240215 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:48:09.240226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:09.240237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:09.240282 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:48:09.240293 | orchestrator |
2026-03-26 03:48:09.240304 | orchestrator | TASK [manila : Check manila containers] ****************************************
2026-03-26 03:48:09.240316 | orchestrator | Thursday 26 March 2026 03:48:05 +0000 (0:00:00.767) 0:01:08.745 ********
2026-03-26 03:48:09.240336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:51.739495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:51.739608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-26 03:48:51.739627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-26 03:48:51.739841 | orchestrator |
2026-03-26 03:48:51.739853 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-03-26 03:48:51.739865 | orchestrator | Thursday 26 March 2026 03:48:09 +0000 (0:00:03.570) 0:01:12.315 ********
2026-03-26 03:48:51.739876 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:48:51.739886 | orchestrator |
2026-03-26 03:48:51.739896 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-03-26 03:48:51.739905 | orchestrator | Thursday 26 March 2026 03:48:11 +0000 (0:00:02.477) 0:01:14.793 ********
2026-03-26 03:48:51.739914 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:48:51.739923 | orchestrator |
2026-03-26 03:48:51.739933 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-03-26 03:48:51.739943 | orchestrator | Thursday 26 March 2026 03:48:14 +0000 (0:00:02.303) 0:01:17.096 ********
2026-03-26 03:48:51.739953 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:48:51.739962 | orchestrator |
2026-03-26 03:48:51.739971 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-26 03:48:51.739981 | orchestrator | Thursday 26 March 2026 03:48:51 +0000 (0:00:37.486) 0:01:54.583 ********
2026-03-26 03:48:51.739989 | orchestrator |
2026-03-26 03:48:51.740007 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-26 03:49:47.862090 | orchestrator | Thursday 26 March 2026 03:48:51 +0000 (0:00:00.076) 0:01:54.660 ********
2026-03-26 03:49:47.862207 | orchestrator |
2026-03-26 03:49:47.862224 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-03-26 03:49:47.862237 | orchestrator | Thursday 26 March 2026 03:48:51 +0000 (0:00:00.074) 0:01:54.734 ********
2026-03-26 03:49:47.862248 | orchestrator |
2026-03-26 03:49:47.862260 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-03-26 03:49:47.862270 | orchestrator | Thursday 26 March 2026 03:48:51 +0000 (0:00:00.077) 0:01:54.811 ********
2026-03-26 03:49:47.862282 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:49:47.862294 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:49:47.862305 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:49:47.862316 | orchestrator |
2026-03-26 03:49:47.862327 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-03-26 03:49:47.862338 | orchestrator | Thursday 26 March 2026 03:49:07 +0000 (0:00:15.489) 0:02:10.300 ********
2026-03-26 03:49:47.862348 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:49:47.862374 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:49:47.862386 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:49:47.862397 | orchestrator |
2026-03-26 03:49:47.862408 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-03-26 03:49:47.862449 | orchestrator | Thursday 26 March 2026 03:49:17 +0000 (0:00:10.691) 0:02:20.992 ********
2026-03-26 03:49:47.862461 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:49:47.862472 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:49:47.862482 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:49:47.862493 | orchestrator |
2026-03-26 03:49:47.862504 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-03-26 03:49:47.862515 | orchestrator | Thursday 26 March 2026 03:49:28 +0000 (0:00:10.408) 0:02:31.400 ********
2026-03-26 03:49:47.862525 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:49:47.862536 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:49:47.862547 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:49:47.862558 | orchestrator |
2026-03-26 03:49:47.862570 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:49:47.862583 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 03:49:47.862597 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 03:49:47.862609 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-26 03:49:47.862621 | orchestrator |
2026-03-26 03:49:47.862634 | orchestrator |
2026-03-26 03:49:47.862646 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:49:47.862659 | orchestrator | Thursday 26 March 2026 03:49:47 +0000 (0:00:19.047) 0:02:50.448 ********
2026-03-26 03:49:47.862671 | orchestrator | ===============================================================================
2026-03-26 03:49:47.862684 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 37.49s
2026-03-26 03:49:47.862696 | orchestrator | manila : Restart manila-share container -------------------------------- 19.05s
2026-03-26 03:49:47.862708 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.49s
2026-03-26 03:49:47.862735 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.26s
2026-03-26 03:49:47.862770 | orchestrator | manila : Restart manila-data container --------------------------------- 10.69s
2026-03-26 03:49:47.862798 |
orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.41s 2026-03-26 03:49:47.862811 | orchestrator | manila : Copying over manila.conf --------------------------------------- 7.10s 2026-03-26 03:49:47.862824 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.59s 2026-03-26 03:49:47.862836 | orchestrator | manila : Copying over config.json files for services -------------------- 4.66s 2026-03-26 03:49:47.862849 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.17s 2026-03-26 03:49:47.862861 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 4.01s 2026-03-26 03:49:47.862873 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.98s 2026-03-26 03:49:47.862885 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.91s 2026-03-26 03:49:47.862898 | orchestrator | manila : Check manila containers ---------------------------------------- 3.57s 2026-03-26 03:49:47.862911 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.42s 2026-03-26 03:49:47.862923 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.23s 2026-03-26 03:49:47.862936 | orchestrator | manila : Creating Manila database --------------------------------------- 2.48s 2026-03-26 03:49:47.862949 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.41s 2026-03-26 03:49:47.862961 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.30s 2026-03-26 03:49:47.862972 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.88s 2026-03-26 03:49:48.249863 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-03-26 03:50:00.619708 | orchestrator | 2026-03-26 03:50:00 
| INFO  | Task 0cae00d9-e756-4c7c-ac2c-a238d5320de9 (netdata) was prepared for execution. 2026-03-26 03:50:00.619873 | orchestrator | 2026-03-26 03:50:00 | INFO  | It takes a moment until task 0cae00d9-e756-4c7c-ac2c-a238d5320de9 (netdata) has been started and output is visible here. 2026-03-26 03:51:39.721063 | orchestrator | 2026-03-26 03:51:39.721159 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:51:39.721171 | orchestrator | 2026-03-26 03:51:39.721178 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:51:39.721185 | orchestrator | Thursday 26 March 2026 03:50:05 +0000 (0:00:00.303) 0:00:00.303 ******** 2026-03-26 03:51:39.721193 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-03-26 03:51:39.721200 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-03-26 03:51:39.721207 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-03-26 03:51:39.721213 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-03-26 03:51:39.721220 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-03-26 03:51:39.721226 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-03-26 03:51:39.721232 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-03-26 03:51:39.721238 | orchestrator | 2026-03-26 03:51:39.721245 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-03-26 03:51:39.721251 | orchestrator | 2026-03-26 03:51:39.721257 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-26 03:51:39.721264 | orchestrator | Thursday 26 March 2026 03:50:06 +0000 (0:00:01.086) 0:00:01.389 ******** 2026-03-26 03:51:39.721271 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:51:39.721280 | orchestrator | 2026-03-26 03:51:39.721286 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-26 03:51:39.721293 | orchestrator | Thursday 26 March 2026 03:50:08 +0000 (0:00:01.594) 0:00:02.984 ******** 2026-03-26 03:51:39.721299 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:51:39.721306 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:51:39.721313 | orchestrator | ok: [testbed-manager] 2026-03-26 03:51:39.721320 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:51:39.721326 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:51:39.721332 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:51:39.721339 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:51:39.721345 | orchestrator | 2026-03-26 03:51:39.721351 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-26 03:51:39.721358 | orchestrator | Thursday 26 March 2026 03:50:10 +0000 (0:00:02.156) 0:00:05.141 ******** 2026-03-26 03:51:39.721364 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:51:39.721370 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:51:39.721377 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:51:39.721383 | orchestrator | ok: [testbed-node-3] 2026-03-26 03:51:39.721389 | orchestrator | ok: [testbed-node-4] 2026-03-26 03:51:39.721395 | orchestrator | ok: [testbed-node-5] 2026-03-26 03:51:39.721401 | orchestrator | ok: [testbed-manager] 2026-03-26 03:51:39.721408 | orchestrator | 2026-03-26 03:51:39.721414 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-03-26 03:51:39.721421 | orchestrator | Thursday 26 March 2026 03:50:12 +0000 (0:00:02.351) 0:00:07.492 ******** 
2026-03-26 03:51:39.721427 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:51:39.721433 | orchestrator | changed: [testbed-manager]
2026-03-26 03:51:39.721440 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:51:39.721446 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:51:39.721452 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:51:39.721478 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:51:39.721484 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:51:39.721490 | orchestrator |
2026-03-26 03:51:39.721497 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-26 03:51:39.721514 | orchestrator | Thursday 26 March 2026 03:50:14 +0000 (0:00:01.655) 0:00:09.147 ********
2026-03-26 03:51:39.721521 | orchestrator | changed: [testbed-manager]
2026-03-26 03:51:39.721527 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:51:39.721533 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:51:39.721540 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:51:39.721546 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:51:39.721552 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:51:39.721558 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:51:39.721564 | orchestrator |
2026-03-26 03:51:39.721572 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-26 03:51:39.721579 | orchestrator | Thursday 26 March 2026 03:50:30 +0000 (0:00:16.080) 0:00:25.228 ********
2026-03-26 03:51:39.721586 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:51:39.721593 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:51:39.721600 | orchestrator | changed: [testbed-manager]
2026-03-26 03:51:39.721607 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:51:39.721614 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:51:39.721621 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:51:39.721628 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:51:39.721636 | orchestrator |
2026-03-26 03:51:39.721643 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-26 03:51:39.721651 | orchestrator | Thursday 26 March 2026 03:51:12 +0000 (0:00:42.183) 0:01:07.411 ********
2026-03-26 03:51:39.721659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 03:51:39.721672 | orchestrator |
2026-03-26 03:51:39.721683 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-26 03:51:39.721694 | orchestrator | Thursday 26 March 2026 03:51:14 +0000 (0:00:01.653) 0:01:09.064 ********
2026-03-26 03:51:39.721707 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-26 03:51:39.721718 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-26 03:51:39.721728 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-26 03:51:39.721757 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-26 03:51:39.721787 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-26 03:51:39.721797 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-26 03:51:39.721807 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-26 03:51:39.721817 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-26 03:51:39.721828 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-26 03:51:39.721838 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-26 03:51:39.721849 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-26 03:51:39.721859 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-26 03:51:39.721869 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-26 03:51:39.721875 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-26 03:51:39.721881 | orchestrator |
2026-03-26 03:51:39.721888 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-26 03:51:39.721895 | orchestrator | Thursday 26 March 2026 03:51:17 +0000 (0:00:03.435) 0:01:12.500 ********
2026-03-26 03:51:39.721901 | orchestrator | ok: [testbed-manager]
2026-03-26 03:51:39.721908 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:51:39.721914 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:51:39.721920 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:51:39.721935 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:51:39.721941 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:51:39.721948 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:51:39.721954 | orchestrator |
2026-03-26 03:51:39.721960 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-26 03:51:39.721967 | orchestrator | Thursday 26 March 2026 03:51:19 +0000 (0:00:01.312) 0:01:13.812 ********
2026-03-26 03:51:39.721973 | orchestrator | changed: [testbed-manager]
2026-03-26 03:51:39.721979 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:51:39.721985 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:51:39.721992 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:51:39.721998 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:51:39.722004 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:51:39.722010 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:51:39.722059 | orchestrator |
2026-03-26 03:51:39.722065 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-26 03:51:39.722072 | orchestrator | Thursday 26 March 2026 03:51:20 +0000 (0:00:01.334) 0:01:15.147 ********
2026-03-26 03:51:39.722078 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:51:39.722084 | orchestrator | ok: [testbed-manager]
2026-03-26 03:51:39.722091 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:51:39.722097 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:51:39.722103 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:51:39.722109 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:51:39.722116 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:51:39.722122 | orchestrator |
2026-03-26 03:51:39.722128 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-26 03:51:39.722135 | orchestrator | Thursday 26 March 2026 03:51:21 +0000 (0:00:01.269) 0:01:16.417 ********
2026-03-26 03:51:39.722141 | orchestrator | ok: [testbed-manager]
2026-03-26 03:51:39.722147 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:51:39.722157 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:51:39.722168 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:51:39.722178 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:51:39.722189 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:51:39.722201 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:51:39.722212 | orchestrator |
2026-03-26 03:51:39.722223 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-26 03:51:39.722232 | orchestrator | Thursday 26 March 2026 03:51:23 +0000 (0:00:01.523) 0:01:18.205 ********
2026-03-26 03:51:39.722243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-26 03:51:39.722263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 03:51:39.722275 | orchestrator |
2026-03-26 03:51:39.722286 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-26 03:51:39.722298 | orchestrator | Thursday 26 March 2026 03:51:25 +0000 (0:00:01.523) 0:01:19.729 ********
2026-03-26 03:51:39.722310 | orchestrator | changed: [testbed-manager]
2026-03-26 03:51:39.722321 | orchestrator |
2026-03-26 03:51:39.722331 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-26 03:51:39.722342 | orchestrator | Thursday 26 March 2026 03:51:28 +0000 (0:00:03.358) 0:01:23.087 ********
2026-03-26 03:51:39.722354 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:51:39.722365 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:51:39.722377 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:51:39.722389 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:51:39.722400 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:51:39.722412 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:51:39.722423 | orchestrator | changed: [testbed-manager]
2026-03-26 03:51:39.722434 | orchestrator |
2026-03-26 03:51:39.722445 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:51:39.722466 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:51:39.722479 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:51:39.722490 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:51:39.722502 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:51:39.722522 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:51:40.264427 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:51:40.264525 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 03:51:40.264543 | orchestrator |
2026-03-26 03:51:40.264557 | orchestrator |
2026-03-26 03:51:40.264570 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:51:40.264586 | orchestrator | Thursday 26 March 2026 03:51:39 +0000 (0:00:11.180) 0:01:34.268 ********
2026-03-26 03:51:40.264598 | orchestrator | ===============================================================================
2026-03-26 03:51:40.264611 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 42.18s
2026-03-26 03:51:40.264623 | orchestrator | osism.services.netdata : Add repository -------------------------------- 16.08s
2026-03-26 03:51:40.264652 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.18s
2026-03-26 03:51:40.264675 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.44s
2026-03-26 03:51:40.264689 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.36s
2026-03-26 03:51:40.264697 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.35s
2026-03-26 03:51:40.264705 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.16s
2026-03-26 03:51:40.264713 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.79s
2026-03-26 03:51:40.264721 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.66s
2026-03-26 03:51:40.264729 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.65s
2026-03-26 03:51:40.264753 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.59s
2026-03-26 03:51:40.264761 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.52s
2026-03-26 03:51:40.264769 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.33s
2026-03-26 03:51:40.264776 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.31s
2026-03-26 03:51:40.264784 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.27s
2026-03-26 03:51:40.264792 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s
2026-03-26 03:51:46.363395 | orchestrator | 2026-03-26 03:51:46 | INFO  | Task 841b3c3f-80c9-4ffe-85c0-5ab312ce4bb3 (prometheus) was prepared for execution.
2026-03-26 03:51:46.363531 | orchestrator | 2026-03-26 03:51:46 | INFO  | It takes a moment until task 841b3c3f-80c9-4ffe-85c0-5ab312ce4bb3 (prometheus) has been started and output is visible here.
2026-03-26 03:51:56.422150 | orchestrator |
2026-03-26 03:51:56.422287 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 03:51:56.422311 | orchestrator |
2026-03-26 03:51:56.422328 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 03:51:56.422379 | orchestrator | Thursday 26 March 2026 03:51:50 +0000 (0:00:00.311) 0:00:00.311 ********
2026-03-26 03:51:56.422398 | orchestrator | ok: [testbed-manager]
2026-03-26 03:51:56.422419 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:51:56.422454 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:51:56.422472 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:51:56.422490 | orchestrator | ok: [testbed-node-3]
2026-03-26 03:51:56.422508 | orchestrator | ok: [testbed-node-4]
2026-03-26 03:51:56.422529 | orchestrator | ok: [testbed-node-5]
2026-03-26 03:51:56.422549 | orchestrator |
2026-03-26 03:51:56.422568 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 03:51:56.422587 | orchestrator | Thursday 26 March 2026 03:51:52 +0000 (0:00:01.016) 0:00:01.328 ********
2026-03-26 03:51:56.422610 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-26 03:51:56.422631 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-26 03:51:56.422653 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-26 03:51:56.422674 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-26 03:51:56.422697 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-26 03:51:56.422718 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-26 03:51:56.422772 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-26 03:51:56.422792 | orchestrator |
2026-03-26 03:51:56.422814 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-26 03:51:56.422837 | orchestrator |
2026-03-26 03:51:56.422857 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-26 03:51:56.422876 | orchestrator | Thursday 26 March 2026 03:51:52 +0000 (0:00:00.954) 0:00:02.282 ********
2026-03-26 03:51:56.422898 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 03:51:56.422920 | orchestrator |
2026-03-26 03:51:56.422939 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-26 03:51:56.422960 | orchestrator | Thursday 26 March 2026 03:51:54 +0000 (0:00:01.436) 0:00:03.718 ********
2026-03-26 03:51:56.422986 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-26 03:51:56.423013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:51:56.423035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:51:56.423075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:51:56.423134 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:51:56.423157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:51:56.423174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:51:56.423192 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:51:56.423211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:51:56.423231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:51:56.423251 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:51:56.423296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:51:57.443064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:51:57.443167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:51:57.443186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:51:57.443199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:51:57.443210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:51:57.443224 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-26 03:51:57.443334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:51:57.443358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26
03:51:57.443371 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:51:57.443382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:51:57.443394 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:51:57.443406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:51:57.443425 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:51:57.443437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:51:57.443472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-26 03:52:02.782857 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:52:02.782963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:02.782977 | orchestrator | 2026-03-26 03:52:02.782987 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-26 03:52:02.782996 | orchestrator | Thursday 26 March 2026 03:51:57 +0000 (0:00:03.021) 0:00:06.740 ******** 2026-03-26 03:52:02.783005 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 03:52:02.783015 | orchestrator | 2026-03-26 03:52:02.783022 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-26 03:52:02.783030 | orchestrator | Thursday 26 March 2026 03:51:59 +0000 (0:00:01.776) 0:00:08.516 ******** 2026-03-26 03:52:02.783038 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-26 03:52:02.783066 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:02.783074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:02.783082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:02.783117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:02.783126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:02.783134 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:02.783141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:02.783156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:02.783164 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:02.783172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:02.783180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:02.783199 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:04.827296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:04.827303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:04.827311 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827350 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-26 03:52:04.827361 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:04.827405 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:04.827411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:04.827425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:06.392901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:06.393019 | orchestrator | 2026-03-26 03:52:06.393035 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-26 03:52:06.393047 | orchestrator | Thursday 26 March 2026 03:52:04 +0000 (0:00:05.614) 0:00:14.130 ******** 2026-03-26 03:52:06.393061 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-26 03:52:06.393073 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:06.393084 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:06.393146 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-26 03:52:06.393178 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:06.393190 | orchestrator | skipping: [testbed-manager] 2026-03-26 03:52:06.393202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:06.393222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:06.393232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:06.393243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:06.393253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-26 03:52:06.393264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:06.393279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:06.393296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:07.047562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:07.047667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:07.047684 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:52:07.047697 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:52:07.047708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:07.047720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:07.047818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:07.047849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 03:52:07.047862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:07.047903 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:52:07.047933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:07.047945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:07.047955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:07.047965 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:52:07.047975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:07.047985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:07.047996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 03:52:07.048006 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:52:07.048021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:07.048045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:08.112414 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 03:52:08.112519 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:52:08.112535 | orchestrator | 2026-03-26 03:52:08.112547 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-26 03:52:08.112559 | orchestrator | Thursday 26 March 2026 03:52:06 +0000 (0:00:02.178) 0:00:16.309 ******** 2026-03-26 03:52:08.112571 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-26 03:52:08.112584 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:08.112595 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:08.112625 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-26 03:52:08.112673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:08.112686 | orchestrator | skipping: [testbed-manager] 2026-03-26 03:52:08.112696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:08.112707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:08.112717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:08.112729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:08.112857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:08.112875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:08.112895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:08.112914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:09.374313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:09.374395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:09.374405 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:52:09.374413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:09.374420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:09.374426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:09.374461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:09.374467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 03:52:09.374472 | orchestrator | skipping: [testbed-node-1] 
2026-03-26 03:52:09.374478 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:52:09.374496 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:09.374502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:09.374507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 03:52:09.374513 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:52:09.374518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:09.374523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:09.374536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 03:52:09.374542 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:52:09.374547 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 03:52:09.374557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 03:52:13.376225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 03:52:13.376368 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:52:13.376398 | orchestrator | 2026-03-26 03:52:13.376419 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-26 03:52:13.376441 | orchestrator | Thursday 26 March 2026 03:52:09 +0000 (0:00:02.365) 0:00:18.675 ******** 2026-03-26 03:52:13.376462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:13.376484 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-26 03:52:13.376533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:13.376561 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:13.376574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:13.376606 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:13.376618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-03-26 03:52:13.376630 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-26 03:52:13.376641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:13.376661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:13.376673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:13.376692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:13.376704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:13.376724 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-03-26 03:52:16.303283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:16.303412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:16.303456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:16.303469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:52:16.303495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:52:16.303506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:16.303517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-26 03:52:16.303548 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-26 03:52:16.303562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:16.303584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:16.303594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-26 03:52:16.303610 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:16.303621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:16.303632 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:16.303651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 03:52:20.138336 | orchestrator | 2026-03-26 03:52:20.138435 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-26 03:52:20.138449 | orchestrator | Thursday 26 March 2026 03:52:16 +0000 (0:00:06.928) 0:00:25.603 ******** 2026-03-26 03:52:20.138459 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 03:52:20.138470 | orchestrator | 2026-03-26 03:52:20.138479 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-26 03:52:20.138506 | orchestrator | Thursday 26 March 2026 03:52:17 +0000 (0:00:00.970) 0:00:26.573 ******** 2026-03-26 03:52:20.138518 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 
1100875, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8383117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138531 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100875, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8383117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138540 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100875, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8383117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138564 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100918, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8500881, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138575 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100875, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8383117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:20.138584 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100918, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8500881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138612 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100875, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8383117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138629 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100875, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8383117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138639 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1100875, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8383117, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138648 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100918, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8500881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138662 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100918, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8500881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138672 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100857, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8349571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138681 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100857, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8349571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:20.138697 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100918, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8500881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972468 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100918, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8500881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972601 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100857, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8349571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972629 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100857, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774489902.8349571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972672 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100857, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8349571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972693 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100901, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8452802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972714 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100901, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8452802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972841 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100901, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8452802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972888 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100857, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8349571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972910 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100901, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8452802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972930 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100901, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8452802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.972960 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1100918, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8500881, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:21.972981 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100854, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8317897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.973000 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100901, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8452802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.973031 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100854, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8317897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:21.973066 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100854, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8317897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508418 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100854, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774489902.8317897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508507 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100854, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8317897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508528 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100881, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8386195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508535 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100854, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8317897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508542 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100881, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8386195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508573 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100881, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8386195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508610 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100881, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8386195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508630 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100896, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8417988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508637 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100881, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8386195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508648 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100896, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8417988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508654 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100896, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8417988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508666 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1100857, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8349571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:23.508673 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100881, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8386195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508680 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100884, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8392801, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:23.508690 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100896, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8417988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954094 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100896, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8417988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954217 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100884, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8392801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-03-26 03:52:24.954247 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100896, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8417988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954293 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100884, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8392801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954315 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100864, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954335 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100884, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8392801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954355 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100884, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8392801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954396 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100864, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954425 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 5987, 'inode': 1100864, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954446 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1100901, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8452802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:24.954475 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100864, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954495 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100884, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8392801, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954514 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100864, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954533 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100917, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:24.954564 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100917, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512670 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100917, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512813 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100864, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512852 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100917, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512866 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100917, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512877 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100852, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512889 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100852, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512901 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100917, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512939 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100852, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512960 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100852, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512972 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1100854, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774489902.8317897, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:26.512984 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100852, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.512996 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100931, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.513007 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100931, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.513019 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100931, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:26.513044 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100852, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867316 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100931, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867388 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100931, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867401 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100915, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867410 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100931, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867419 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100915, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867428 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100915, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867453 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100915, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100915, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867498 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100855, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867507 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100855, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867515 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100915, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867524 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100855, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867532 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100855, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:27.867551 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1100881, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8386195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:27.867565 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100853, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318448 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100855, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318531 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100855, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318544 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100853, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318555 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100853, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318566 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100853, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318605 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100853, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318623 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100893, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8408408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318646 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100893, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8408408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318657 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100853, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318667 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100893, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8408408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318677 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100887, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.84028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318689 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100893, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8408408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318708 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100893, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8408408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318726 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100887, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.84028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:29.318791 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100893, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8408408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294576 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1100896, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8417988, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:35.294695 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100887, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.84028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294711 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100887, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.84028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294723 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100887, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.84028, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294833 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100928, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294853 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:52:35.294884 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100887, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.84028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294897 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100928, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294926 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:52:35.294938 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100928, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294949 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:52:35.294961 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100928, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.294973 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:52:35.294984 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100928, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.295005 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:52:35.295016 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100928, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-26 03:52:35.295027 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:52:35.295044 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1100884, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8392801, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:35.295056 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1100864, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:52:35.295076 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100917, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670550 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100852, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670666 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1100931, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 
03:53:03.670703 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1100915, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8482802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670716 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1100855, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670790 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1100853, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.83128, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670804 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1100893, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8408408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670814 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1100887, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.84028, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670842 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1100928, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8532803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-26 03:53:03.670854 | orchestrator | 2026-03-26 03:53:03.670866 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-26 03:53:03.670878 | orchestrator | Thursday 26 March 2026 03:52:43 +0000 (0:00:25.818) 0:00:52.392 ******** 2026-03-26 03:53:03.670889 | orchestrator | 
ok: [testbed-manager -> localhost] 2026-03-26 03:53:03.670900 | orchestrator | 2026-03-26 03:53:03.670910 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-26 03:53:03.670930 | orchestrator | Thursday 26 March 2026 03:52:43 +0000 (0:00:00.819) 0:00:53.211 ******** 2026-03-26 03:53:03.670940 | orchestrator | [WARNING]: Skipped 2026-03-26 03:53:03.670951 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.670961 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-26 03:53:03.670971 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.670981 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-26 03:53:03.670990 | orchestrator | [WARNING]: Skipped 2026-03-26 03:53:03.671000 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671010 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-26 03:53:03.671019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671029 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-26 03:53:03.671038 | orchestrator | [WARNING]: Skipped 2026-03-26 03:53:03.671048 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671058 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-26 03:53:03.671067 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671077 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-26 03:53:03.671088 | orchestrator | [WARNING]: Skipped 2026-03-26 03:53:03.671100 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671117 | 
orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-26 03:53:03.671134 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671171 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-26 03:53:03.671189 | orchestrator | [WARNING]: Skipped 2026-03-26 03:53:03.671205 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671218 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-26 03:53:03.671234 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671250 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-26 03:53:03.671266 | orchestrator | [WARNING]: Skipped 2026-03-26 03:53:03.671284 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671303 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-26 03:53:03.671319 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671345 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-26 03:53:03.671363 | orchestrator | [WARNING]: Skipped 2026-03-26 03:53:03.671380 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671396 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-26 03:53:03.671412 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-26 03:53:03.671428 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-26 03:53:03.671444 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:53:03.671460 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 03:53:03.671476 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-26 03:53:03.671493 
| orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-26 03:53:03.671509 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-26 03:53:03.671526 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-26 03:53:03.671543 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-26 03:53:03.671559 | orchestrator | 2026-03-26 03:53:03.671573 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-26 03:53:03.671583 | orchestrator | Thursday 26 March 2026 03:52:45 +0000 (0:00:01.919) 0:00:55.131 ******** 2026-03-26 03:53:03.671619 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-26 03:53:03.671642 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-26 03:53:03.671652 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:53:03.671662 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:53:03.671672 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-26 03:53:03.671682 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:53:03.671703 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-26 03:53:21.823392 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:53:21.823498 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-26 03:53:21.823515 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:53:21.823526 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-26 03:53:21.823536 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:53:21.823546 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-26 03:53:21.823557 | orchestrator | 
2026-03-26 03:53:21.823567 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-26 03:53:21.823578 | orchestrator | Thursday 26 March 2026 03:53:03 +0000 (0:00:17.839) 0:01:12.971 ******** 2026-03-26 03:53:21.823588 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-26 03:53:21.823598 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-26 03:53:21.823608 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:53:21.823618 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:53:21.823629 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-26 03:53:21.823638 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:53:21.823648 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-26 03:53:21.823662 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:53:21.823678 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-26 03:53:21.823694 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:53:21.823711 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-26 03:53:21.823727 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:53:21.823833 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-26 03:53:21.823850 | orchestrator | 2026-03-26 03:53:21.823861 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-26 03:53:21.823871 | orchestrator | Thursday 26 March 2026 03:53:06 +0000 (0:00:02.942) 0:01:15.913 ******** 2026-03-26 03:53:21.823882 | orchestrator | skipping: 
[testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-26 03:53:21.823893 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-26 03:53:21.823904 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:53:21.823917 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:53:21.823928 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-26 03:53:21.823940 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:53:21.824015 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-26 03:53:21.824048 | orchestrator | skipping: [testbed-node-3] 2026-03-26 03:53:21.824060 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-26 03:53:21.824071 | orchestrator | skipping: [testbed-node-4] 2026-03-26 03:53:21.824082 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-26 03:53:21.824107 | orchestrator | skipping: [testbed-node-5] 2026-03-26 03:53:21.824119 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-26 03:53:21.824130 | orchestrator | 2026-03-26 03:53:21.824141 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-26 03:53:21.824152 | orchestrator | Thursday 26 March 2026 03:53:08 +0000 (0:00:02.029) 0:01:17.943 ******** 2026-03-26 03:53:21.824164 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 03:53:21.824175 | orchestrator | 
2026-03-26 03:53:21.824186 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-03-26 03:53:21.824197 | orchestrator | Thursday 26 March 2026 03:53:09 +0000 (0:00:00.839) 0:01:18.782 ********
2026-03-26 03:53:21.824209 | orchestrator | skipping: [testbed-manager]
2026-03-26 03:53:21.824221 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:53:21.824232 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:53:21.824243 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:53:21.824254 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:53:21.824267 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:53:21.824284 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:53:21.824301 | orchestrator |
2026-03-26 03:53:21.824317 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-03-26 03:53:21.824335 | orchestrator | Thursday 26 March 2026 03:53:10 +0000 (0:00:00.868) 0:01:19.650 ********
2026-03-26 03:53:21.824352 | orchestrator | skipping: [testbed-manager]
2026-03-26 03:53:21.824368 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:53:21.824384 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:53:21.824402 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:53:21.824419 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:53:21.824437 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:53:21.824453 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:53:21.824470 | orchestrator |
2026-03-26 03:53:21.824482 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-03-26 03:53:21.824511 | orchestrator | Thursday 26 March 2026 03:53:12 +0000 (0:00:02.389) 0:01:22.040 ********
2026-03-26 03:53:21.824523 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-26 03:53:21.824535 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-26 03:53:21.824546 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-26 03:53:21.824557 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-26 03:53:21.824568 | orchestrator | skipping: [testbed-manager]
2026-03-26 03:53:21.824579 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:53:21.824590 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:53:21.824601 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:53:21.824612 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-26 03:53:21.824623 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:53:21.824634 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-26 03:53:21.824646 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:53:21.824658 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-03-26 03:53:21.824669 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:53:21.824690 | orchestrator |
2026-03-26 03:53:21.824701 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-03-26 03:53:21.824713 | orchestrator | Thursday 26 March 2026 03:53:14 +0000 (0:00:01.614) 0:01:23.654 ********
2026-03-26 03:53:21.824723 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-26 03:53:21.824767 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:53:21.824779 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-26 03:53:21.824790 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:53:21.824802 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-26 03:53:21.824819 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:53:21.824832 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-26 03:53:21.824848 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:53:21.824864 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-26 03:53:21.824879 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:53:21.824895 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-26 03:53:21.824914 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:53:21.824930 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-03-26 03:53:21.824948 | orchestrator |
2026-03-26 03:53:21.824960 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-03-26 03:53:21.824972 | orchestrator | Thursday 26 March 2026 03:53:16 +0000 (0:00:01.677) 0:01:25.332 ********
2026-03-26 03:53:21.824983 | orchestrator | [WARNING]: Skipped
2026-03-26 03:53:21.824996 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-03-26 03:53:21.825006 | orchestrator | due to this access issue:
2026-03-26 03:53:21.825018 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-03-26 03:53:21.825029 | orchestrator | not a directory
2026-03-26 03:53:21.825040 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-26 03:53:21.825051 | orchestrator |
2026-03-26 03:53:21.825069 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-03-26 03:53:21.825080 | orchestrator | Thursday 26 March 2026 03:53:17 +0000 (0:00:01.186) 0:01:26.518 ********
2026-03-26 03:53:21.825091 | orchestrator | skipping: [testbed-manager]
2026-03-26 03:53:21.825102 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:53:21.825113 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:53:21.825124 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:53:21.825135 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:53:21.825146 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:53:21.825156 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:53:21.825233 | orchestrator |
2026-03-26 03:53:21.825245 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-03-26 03:53:21.825255 | orchestrator | Thursday 26 March 2026 03:53:18 +0000 (0:00:01.015) 0:01:27.534 ********
2026-03-26 03:53:21.825265 | orchestrator | skipping: [testbed-manager]
2026-03-26 03:53:21.825274 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:53:21.825284 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:53:21.825294 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:53:21.825303 | orchestrator | skipping: [testbed-node-3]
2026-03-26 03:53:21.825313 | orchestrator | skipping: [testbed-node-4]
2026-03-26 03:53:21.825323 | orchestrator | skipping: [testbed-node-5]
2026-03-26 03:53:21.825332 | orchestrator |
2026-03-26 03:53:21.825342 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-03-26 03:53:21.825352 | orchestrator | Thursday 26 March 2026 03:53:19 +0000 (0:00:00.943) 0:01:28.478 ********
2026-03-26 03:53:21.825386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:53:23.462090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:53:23.462207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:53:23.462229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:53:23.462249 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-26 03:53:23.462286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:53:23.462304 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:53:23.462352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:23.462393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:23.462411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:23.462428 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-26 03:53:23.462445 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:53:23.462464 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:53:23.462489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:53:23.462508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:23.462546 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:53:25.522089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:25.522176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:25.522186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-26 03:53:25.522195 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-26 03:53:25.522215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-26 03:53:25.522223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:53:25.522263 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-26 03:53:25.522272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:53:25.522279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-26 03:53:25.522286 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:25.522292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:25.522303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:25.522316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 03:53:25.522323 | orchestrator |
2026-03-26 03:53:25.522331 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-03-26 03:53:25.522339 | orchestrator | Thursday 26 March 2026 03:53:23 +0000 (0:00:04.294) 0:01:32.772 ********
2026-03-26 03:53:25.522347 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-26 03:53:25.522353 | orchestrator | skipping: [testbed-manager]
2026-03-26 03:53:25.522360 | orchestrator |
2026-03-26 03:53:25.522367 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-26 03:53:25.522374 | orchestrator | Thursday 26 March 2026 03:53:24 +0000 (0:00:01.479) 0:01:34.252 ********
2026-03-26 03:53:25.522380 | orchestrator |
2026-03-26 03:53:25.522387 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-26 03:53:25.522393 | orchestrator | Thursday 26 March 2026 03:53:25 +0000 (0:00:00.082) 0:01:34.335 ********
2026-03-26 03:53:25.522399 | orchestrator |
2026-03-26 03:53:25.522406 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-26 03:53:25.522412 | orchestrator | Thursday 26 March 2026 03:53:25 +0000 (0:00:00.077) 0:01:34.412 ********
2026-03-26 03:53:25.522418 | orchestrator |
2026-03-26 03:53:25.522425 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-26 03:53:25.522435 | orchestrator | Thursday 26 March 2026 03:53:25 +0000 (0:00:00.074) 0:01:34.486 ********
2026-03-26 03:55:07.879181 | orchestrator |
2026-03-26 03:55:07.879307 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-26 03:55:07.879327 | orchestrator | Thursday 26 March 2026 03:53:25 +0000 (0:00:00.080) 0:01:34.567 ********
2026-03-26 03:55:07.879334 | orchestrator |
2026-03-26 03:55:07.879340 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-26 03:55:07.879347 | orchestrator | Thursday 26 March 2026 03:53:25 +0000 (0:00:00.070) 0:01:34.637 ********
2026-03-26 03:55:07.879353 | orchestrator |
2026-03-26 03:55:07.879360 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-03-26 03:55:07.879367 | orchestrator | Thursday 26 March 2026 03:53:25 +0000 (0:00:00.082) 0:01:34.719 ********
2026-03-26 03:55:07.879375 | orchestrator |
2026-03-26 03:55:07.879382 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-03-26 03:55:07.879388 | orchestrator | Thursday 26 March 2026 03:53:25 +0000 (0:00:00.097) 0:01:34.817 ********
2026-03-26 03:55:07.879394 | orchestrator | changed: [testbed-manager]
2026-03-26 03:55:07.879401 | orchestrator |
2026-03-26 03:55:07.879407 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-03-26 03:55:07.879413 | orchestrator | Thursday 26 March 2026 03:53:47 +0000 (0:00:21.534) 0:01:56.352 ********
2026-03-26 03:55:07.879419 | orchestrator | changed: [testbed-manager]
2026-03-26 03:55:07.879425 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:55:07.879430 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:55:07.879436 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:55:07.879442 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:55:07.879448 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:55:07.879455 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:55:07.879460 | orchestrator |
2026-03-26 03:55:07.879466 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-03-26 03:55:07.879472 | orchestrator | Thursday 26 March 2026 03:54:00 +0000 (0:00:13.813) 0:02:10.165 ********
2026-03-26 03:55:07.879500 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:55:07.879506 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:55:07.879511 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:55:07.879517 | orchestrator |
2026-03-26 03:55:07.879522 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-03-26 03:55:07.879530 | orchestrator | Thursday 26 March 2026 03:54:10 +0000 (0:00:10.085) 0:02:20.251 ********
2026-03-26 03:55:07.879535 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:55:07.879541 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:55:07.879546 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:55:07.879551 | orchestrator |
2026-03-26 03:55:07.879557 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-03-26 03:55:07.879563 | orchestrator | Thursday 26 March 2026 03:54:21 +0000 (0:00:10.498) 0:02:30.749 ********
2026-03-26 03:55:07.879568 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:55:07.879573 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:55:07.879579 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:55:07.879585 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:55:07.879591 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:55:07.879597 | orchestrator | changed: [testbed-manager]
2026-03-26 03:55:07.879604 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:55:07.879611 | orchestrator |
2026-03-26 03:55:07.879617 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-03-26 03:55:07.879624 | orchestrator | Thursday 26 March 2026 03:54:36 +0000 (0:00:14.650) 0:02:45.400 ********
2026-03-26 03:55:07.879630 | orchestrator | changed: [testbed-manager]
2026-03-26 03:55:07.879636 | orchestrator |
2026-03-26 03:55:07.879643 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-26 03:55:07.879665 | orchestrator | Thursday 26 March 2026 03:54:45 +0000 (0:00:08.962) 0:02:54.362 ********
2026-03-26 03:55:07.879672 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:55:07.879678 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:55:07.879685 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:55:07.879691 | orchestrator |
2026-03-26 03:55:07.879697 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-26 03:55:07.879704 | orchestrator | Thursday 26 March 2026 03:54:50 +0000 (0:00:05.845) 0:03:00.208 ********
2026-03-26 03:55:07.879710 | orchestrator | changed: [testbed-manager]
2026-03-26 03:55:07.879717 | orchestrator |
2026-03-26 03:55:07.879723 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-26 03:55:07.879758 | orchestrator | Thursday 26 March 2026 03:54:56 +0000 (0:00:05.838) 0:03:06.047 ********
2026-03-26 03:55:07.879764 | orchestrator | changed: [testbed-node-4]
2026-03-26 03:55:07.879771 | orchestrator | changed: [testbed-node-5]
2026-03-26 03:55:07.879776 | orchestrator | changed: [testbed-node-3]
2026-03-26 03:55:07.879781 | orchestrator |
2026-03-26 03:55:07.879787 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:55:07.879795 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-26 03:55:07.879803 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-26 03:55:07.879808 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-26 03:55:07.879814 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-26 03:55:07.879820 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-26 03:55:07.879862 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-26 03:55:07.879881 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-26 03:55:07.879888 | orchestrator |
2026-03-26 03:55:07.879893 | orchestrator |
2026-03-26 03:55:07.879899 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:55:07.879905 | orchestrator | Thursday 26 March 2026 03:55:07 +0000 (0:00:10.524) 0:03:16.572 ********
2026-03-26 03:55:07.879910 | orchestrator | ===============================================================================
2026-03-26 03:55:07.879916 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.82s
2026-03-26 03:55:07.879922 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.53s
2026-03-26 03:55:07.879928 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.84s
2026-03-26 03:55:07.879934 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.65s
2026-03-26 03:55:07.879940 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.81s
2026-03-26 03:55:07.879947 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.52s
2026-03-26 03:55:07.879953 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.50s
2026-03-26 03:55:07.879960 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.09s
2026-03-26 03:55:07.879966 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.96s
2026-03-26 03:55:07.879973 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.93s
2026-03-26 03:55:07.879980 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.85s
2026-03-26 03:55:07.879987 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.84s
2026-03-26 03:55:07.879994 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.61s
2026-03-26 03:55:07.880001 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.29s
2026-03-26 03:55:07.880007 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.02s
2026-03-26 03:55:07.880013 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.94s
2026-03-26 03:55:07.880020 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.39s
2026-03-26 03:55:07.880026 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.37s
2026-03-26 03:55:07.880032 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.18s
2026-03-26 03:55:07.880038 | orchestrator | prometheus : Copying over 
prometheus alertmanager config file ----------- 2.03s 2026-03-26 03:55:10.262119 | orchestrator | 2026-03-26 03:55:10 | INFO  | Task 00c4bf35-4c60-4b40-be13-05f79dea06c0 (grafana) was prepared for execution. 2026-03-26 03:55:10.262277 | orchestrator | 2026-03-26 03:55:10 | INFO  | It takes a moment until task 00c4bf35-4c60-4b40-be13-05f79dea06c0 (grafana) has been started and output is visible here. 2026-03-26 03:55:20.106529 | orchestrator | 2026-03-26 03:55:20.106616 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 03:55:20.106623 | orchestrator | 2026-03-26 03:55:20.106628 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 03:55:20.106633 | orchestrator | Thursday 26 March 2026 03:55:14 +0000 (0:00:00.255) 0:00:00.255 ******** 2026-03-26 03:55:20.106638 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:55:20.106643 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:55:20.106647 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:55:20.106651 | orchestrator | 2026-03-26 03:55:20.106655 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 03:55:20.106660 | orchestrator | Thursday 26 March 2026 03:55:14 +0000 (0:00:00.375) 0:00:00.630 ******** 2026-03-26 03:55:20.106664 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-26 03:55:20.106686 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-26 03:55:20.106696 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-26 03:55:20.106703 | orchestrator | 2026-03-26 03:55:20.106709 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-26 03:55:20.106715 | orchestrator | 2026-03-26 03:55:20.106721 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-26 03:55:20.106788 | 
orchestrator | Thursday 26 March 2026 03:55:14 +0000 (0:00:00.484) 0:00:01.114 ******** 2026-03-26 03:55:20.106797 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:55:20.106804 | orchestrator | 2026-03-26 03:55:20.106811 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-26 03:55:20.106817 | orchestrator | Thursday 26 March 2026 03:55:15 +0000 (0:00:00.605) 0:00:01.720 ******** 2026-03-26 03:55:20.106828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:20.106835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:20.106840 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:20.106844 | orchestrator | 2026-03-26 03:55:20.106851 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-26 03:55:20.106858 | orchestrator | Thursday 26 March 2026 03:55:16 +0000 (0:00:00.923) 0:00:02.643 ******** 2026-03-26 03:55:20.106864 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-26 03:55:20.106872 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-26 03:55:20.106879 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:55:20.106885 | orchestrator | 2026-03-26 03:55:20.106891 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-26 03:55:20.106897 | orchestrator | Thursday 26 March 2026 03:55:17 +0000 (0:00:00.910) 0:00:03.554 ******** 2026-03-26 03:55:20.106903 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:55:20.106917 | orchestrator | 2026-03-26 03:55:20.106923 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-26 03:55:20.106929 | orchestrator | Thursday 26 March 2026 03:55:18 +0000 (0:00:00.642) 0:00:04.196 ******** 2026-03-26 
03:55:20.106956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:20.106964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:20.106970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:20.106977 | orchestrator | 2026-03-26 03:55:20.106983 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-26 03:55:20.106989 | orchestrator | Thursday 26 March 2026 03:55:19 +0000 (0:00:01.379) 0:00:05.575 ******** 2026-03-26 03:55:20.106996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 03:55:20.107003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 03:55:20.107015 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:55:20.107022 | 
orchestrator | skipping: [testbed-node-1] 2026-03-26 03:55:20.107038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 03:55:27.563695 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:55:27.563939 | orchestrator | 2026-03-26 03:55:27.563972 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-26 03:55:27.563993 | orchestrator | Thursday 26 March 2026 03:55:20 +0000 (0:00:00.638) 0:00:06.214 ******** 2026-03-26 03:55:27.564008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 03:55:27.564024 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:55:27.564036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 03:55:27.564048 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:55:27.564060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-26 03:55:27.564072 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:55:27.564083 | orchestrator | 2026-03-26 03:55:27.564095 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-26 03:55:27.564106 | orchestrator | Thursday 26 March 2026 03:55:20 +0000 (0:00:00.666) 0:00:06.881 ******** 2026-03-26 03:55:27.564132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:27.564180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:27.564231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:27.564248 | 
orchestrator | 2026-03-26 03:55:27.564269 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-26 03:55:27.564289 | orchestrator | Thursday 26 March 2026 03:55:22 +0000 (0:00:01.369) 0:00:08.251 ******** 2026-03-26 03:55:27.564308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:27.564329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:27.564349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-26 03:55:27.564381 | orchestrator | 2026-03-26 03:55:27.564402 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-26 03:55:27.564424 | orchestrator | Thursday 26 March 2026 03:55:23 +0000 (0:00:01.769) 0:00:10.021 ******** 2026-03-26 03:55:27.564445 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:55:27.564465 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:55:27.564486 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:55:27.564505 | orchestrator | 2026-03-26 03:55:27.564524 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-26 03:55:27.564544 | orchestrator | Thursday 26 March 2026 03:55:24 +0000 (0:00:00.390) 0:00:10.411 ******** 2026-03-26 03:55:27.564563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-26 03:55:27.564584 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-26 03:55:27.564605 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-26 03:55:27.564624 | orchestrator | 2026-03-26 03:55:27.564638 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-26 03:55:27.564649 | orchestrator | Thursday 26 March 2026 03:55:25 +0000 (0:00:01.360) 0:00:11.771 ******** 2026-03-26 03:55:27.564661 | orchestrator | changed: [testbed-node-0] 
=> (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-26 03:55:27.564672 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-26 03:55:27.564691 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-26 03:55:27.564702 | orchestrator | 2026-03-26 03:55:27.564713 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-26 03:55:27.564778 | orchestrator | Thursday 26 March 2026 03:55:27 +0000 (0:00:01.892) 0:00:13.664 ******** 2026-03-26 03:55:34.339228 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:55:34.339301 | orchestrator | 2026-03-26 03:55:34.339307 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-26 03:55:34.339313 | orchestrator | Thursday 26 March 2026 03:55:28 +0000 (0:00:00.809) 0:00:14.474 ******** 2026-03-26 03:55:34.339317 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-26 03:55:34.339323 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-26 03:55:34.339327 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:55:34.339333 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:55:34.339337 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:55:34.339341 | orchestrator | 2026-03-26 03:55:34.339345 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-26 03:55:34.339349 | orchestrator | Thursday 26 March 2026 03:55:29 +0000 (0:00:00.739) 0:00:15.213 ******** 2026-03-26 03:55:34.339353 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:55:34.339357 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:55:34.339361 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:55:34.339365 
| orchestrator | 2026-03-26 03:55:34.339369 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-26 03:55:34.339373 | orchestrator | Thursday 26 March 2026 03:55:29 +0000 (0:00:00.378) 0:00:15.593 ******** 2026-03-26 03:55:34.339379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100632, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.732278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1100632, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.732278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 117836, 'inode': 1100632, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.732278, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100710, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7492785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100710, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7492785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1100710, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7492785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100655, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7379594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100655, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7379594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:34.339457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1100655, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7379594, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:55:34 – 03:55:54 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => loop over /operations/grafana/dashboards/ (per-item stat output condensed below; every item is a regular file, mode 0644, owner root:root (uid 0, gid 0), dev 145, nlink 1, atime/mtime 1764530892.0):
2026-03-26 03:55:34 | orchestrator |   ceph/rgw-s3-analytics.json (167897 bytes)
2026-03-26 03:55:38 | orchestrator |   ceph/osd-device-details.json (26655 bytes)
2026-03-26 03:55:38 | orchestrator |   ceph/radosgw-overview.json (39556 bytes)
2026-03-26 03:55:38 | orchestrator |   ceph/README.md (84 bytes)
2026-03-26 03:55:38 | orchestrator |   ceph/ceph-cluster.json (34113 bytes)
2026-03-26 03:55:42 | orchestrator |   ceph/cephfs-overview.json (9025 bytes)
2026-03-26 03:55:42 | orchestrator |   ceph/pool-detail.json (19609 bytes)
2026-03-26 03:55:42 | orchestrator |   ceph/rbd-details.json (12997 bytes)
2026-03-26 03:55:42 | orchestrator |   ceph/ceph_overview.json (80386 bytes)
2026-03-26 03:55:46 | orchestrator |   ceph/radosgw-detail.json (19695 bytes)
2026-03-26 03:55:46 | orchestrator |   ceph/osds-overview.json (38432 bytes)
2026-03-26 03:55:46 | orchestrator |   ceph/multi-cluster-overview.json (62676 bytes)
2026-03-26 03:55:46 | orchestrator |   ceph/hosts-overview.json (27218 bytes)
2026-03-26 03:55:50 | orchestrator |   ceph/pool-overview.json (49139 bytes)
2026-03-26 03:55:50 | orchestrator |   ceph/host-details.json (44791 bytes)
2026-03-26 03:55:50 | orchestrator |   ceph/radosgw-sync-overview.json (16156 bytes)
2026-03-26 03:55:50 | orchestrator |   openstack/openstack.json (57270 bytes)
2026-03-26 03:55:50 | orchestrator |   infrastructure/haproxy.json (410814 bytes)
2026-03-26 03:55:54 | orchestrator |   infrastructure/database.json (30898 bytes)
2026-03-26 03:55:54 | orchestrator |   infrastructure/node-rsrc-use.json (15725 bytes) [testbed-node-1]
2026-03-26 03:55:54.658255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr':
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100796, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7882793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1100796, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7882793, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100722, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7527258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100722, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7527258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1100722, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7527258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100838, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8142796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658458 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100838, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8142796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1100838, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8142796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100797, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8092794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:54.658553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100797, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8092794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1100797, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8092794, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100839, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8152797, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100839, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8152797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1100839, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8152797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1100844, 'dev': 145, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8262799, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1100844, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8262799, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1100844, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8262799, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 
'inode': 1100837, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8132796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100837, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8132796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1100837, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8132796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100790, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7862792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100790, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7862792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:55:58.524668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1100790, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7862792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.820861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100747, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.761357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.820966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100747, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.761357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1100747, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.761357, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100787, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.784279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100787, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.784279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1100787, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.784279, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100739, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7602787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100739, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7602787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100793, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7862792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821108 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1100739, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7602787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100793, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7862792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1100793, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7862792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:02.821181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1100842, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8242798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:06.758522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1100842, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8242798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:06.758647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1100842, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8242798, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:06.758673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100841, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8192797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:06.758685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100841, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8192797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-26 03:56:06.758695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1100841, 'dev': 145, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8192797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100724, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7542787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100724, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7542787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1100724, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7542787, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100726, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7552786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100726, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7552786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100831, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8122797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1100726, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.7552786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:56:06.758938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100831, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8122797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:57:47.094969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100840, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8162796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:57:47.095108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1100831, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8122797, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:57:47.095122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100840, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8162796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:57:47.095131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1100840, 'dev': 145, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1774489902.8162796, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-26 03:57:47.095138 | orchestrator |
2026-03-26 03:57:47.095146 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-03-26 03:57:47.095155 | orchestrator | Thursday 26 March 2026 03:56:08 +0000 (0:00:38.713) 0:00:54.306 ********
2026-03-26 03:57:47.095163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-26 03:57:47.095205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-26 03:57:47.095213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-03-26 03:57:47.095220 | orchestrator |
2026-03-26 03:57:47.095227 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-03-26 03:57:47.095235 | orchestrator | Thursday 26 March 2026 03:56:09 +0000 (0:00:01.098) 0:00:55.404 ********
2026-03-26 03:57:47.095243 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:57:47.095251 | orchestrator |
2026-03-26 03:57:47.095258 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-03-26 03:57:47.095265 | orchestrator | Thursday 26 March 2026 03:56:11 +0000 (0:00:02.371) 0:00:57.776 ********
2026-03-26 03:57:47.095272 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:57:47.095278 | orchestrator |
2026-03-26 03:57:47.095289 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-26 03:57:47.095296 | orchestrator | Thursday 26 March 2026 03:56:14 +0000 (0:00:02.393) 0:01:00.169 ********
2026-03-26 03:57:47.095302 | orchestrator |
2026-03-26 03:57:47.095309 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-26 03:57:47.095316 | orchestrator | Thursday 26 March 2026 03:56:14 +0000 (0:00:00.094) 0:01:00.264 ********
2026-03-26 03:57:47.095322 | orchestrator |
2026-03-26 03:57:47.095329 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-26 03:57:47.095336 | orchestrator | Thursday 26 March 2026 03:56:14 +0000 (0:00:00.106) 0:01:00.370 ********
2026-03-26 03:57:47.095342 | orchestrator |
2026-03-26 03:57:47.095349 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-26 03:57:47.095356 | orchestrator | Thursday 26 March 2026 03:56:14 +0000 (0:00:00.085) 0:01:00.456 ********
2026-03-26 03:57:47.095363 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:57:47.095370 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:57:47.095377 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:57:47.095384 | orchestrator |
2026-03-26 03:57:47.095391 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-26 03:57:47.095398 | orchestrator | Thursday 26 March 2026 03:56:16 +0000 (0:00:02.293) 0:01:02.749 ********
2026-03-26 03:57:47.095405 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:57:47.095412 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:57:47.095419 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-26 03:57:47.095427 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-26 03:57:47.095441 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-03-26 03:57:47.095448 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-03-26 03:57:47.095455 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:57:47.095463 | orchestrator |
2026-03-26 03:57:47.095469 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-26 03:57:47.095476 | orchestrator | Thursday 26 March 2026 03:57:07 +0000 (0:00:51.148) 0:01:53.898 ********
2026-03-26 03:57:47.095483 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:57:47.095490 | orchestrator | changed: [testbed-node-1]
2026-03-26 03:57:47.095496 | orchestrator | changed: [testbed-node-2]
2026-03-26 03:57:47.095503 | orchestrator |
2026-03-26 03:57:47.095509 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-26 03:57:47.095516 | orchestrator | Thursday 26 March 2026 03:57:41 +0000 (0:00:33.842) 0:02:27.740 ********
2026-03-26 03:57:47.095523 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:57:47.095530 | orchestrator |
2026-03-26 03:57:47.095536 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-26 03:57:47.095543 | orchestrator | Thursday 26 March 2026 03:57:44 +0000 (0:00:02.374) 0:02:30.115 ********
2026-03-26 03:57:47.095550 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:57:47.095557 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:57:47.095564 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:57:47.095570 | orchestrator |
2026-03-26 03:57:47.095577 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-26 03:57:47.095584 | orchestrator | Thursday 26 March 2026 03:57:44 +0000 (0:00:00.356) 0:02:30.471 ********
2026-03-26 03:57:47.095592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-26 03:57:47.095624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-26 03:57:47.989206 | orchestrator |
2026-03-26 03:57:47.989327 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-26 03:57:47.989346 | orchestrator | Thursday 26 March 2026 03:57:47 +0000 (0:00:02.724) 0:02:33.195 ********
2026-03-26 03:57:47.989359 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:57:47.989372 | orchestrator |
2026-03-26 03:57:47.989383 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:57:47.989396 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-26 03:57:47.989409 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-26 03:57:47.989420 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-26 03:57:47.989433 | orchestrator |
2026-03-26 03:57:47.989444 | orchestrator |
2026-03-26 03:57:47.989456 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:57:47.989469 | orchestrator | Thursday 26 March 2026 03:57:47 +0000 (0:00:00.328) 0:02:33.523 ********
2026-03-26 03:57:47.989481 | orchestrator | ===============================================================================
2026-03-26 03:57:47.989515 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.15s
2026-03-26 03:57:47.989529 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.71s
2026-03-26 03:57:47.989567 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.84s
2026-03-26 03:57:47.989580 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.72s
2026-03-26 03:57:47.989593 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.39s
2026-03-26 03:57:47.989606 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.37s
2026-03-26 03:57:47.989617 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.37s
2026-03-26 03:57:47.989625 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.29s
2026-03-26 03:57:47.989632 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.89s
2026-03-26 03:57:47.989640 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.77s
2026-03-26 03:57:47.989647 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.38s
2026-03-26 03:57:47.989654 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.37s
2026-03-26 03:57:47.989662 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.36s
2026-03-26 03:57:47.989669 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.10s
2026-03-26 03:57:47.989676 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.92s
2026-03-26 03:57:47.989684 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.91s
2026-03-26 03:57:47.989691 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s
2026-03-26 03:57:47.989698 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.74s
2026-03-26 03:57:47.989706 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.67s
2026-03-26 03:57:47.989713 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.64s
2026-03-26 03:57:48.431972 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-03-26 03:57:48.439915 | orchestrator | + set -e
2026-03-26 03:57:48.440001 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-26 03:57:48.440017 | orchestrator | ++ export INTERACTIVE=false
2026-03-26 03:57:48.440029 | orchestrator | ++ INTERACTIVE=false
2026-03-26 03:57:48.440040 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-26 03:57:48.440051 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-26 03:57:48.440062 | orchestrator | + source /opt/manager-vars.sh
2026-03-26 03:57:48.440073 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-26 03:57:48.440084 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-26 03:57:48.440095 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-26 03:57:48.440105 | orchestrator | ++ CEPH_VERSION=reef
2026-03-26 03:57:48.440116 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-26 03:57:48.440127 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-26 03:57:48.440139 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-26 03:57:48.440149 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-26 03:57:48.440162 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-26 03:57:48.440173 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-26 03:57:48.440183 | orchestrator | ++ export ARA=false
2026-03-26 03:57:48.440195 | orchestrator | ++ ARA=false
2026-03-26 03:57:48.440206 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-26 03:57:48.440217 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-26 03:57:48.440227 | orchestrator | ++ export TEMPEST=false
2026-03-26 03:57:48.440238 | orchestrator | ++ TEMPEST=false
2026-03-26 03:57:48.440249 | orchestrator | ++ export IS_ZUUL=true
2026-03-26 03:57:48.440259 | orchestrator | ++ IS_ZUUL=true
2026-03-26 03:57:48.440270 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2026-03-26 03:57:48.440282 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2026-03-26 03:57:48.440292 | orchestrator | ++ export EXTERNAL_API=false
2026-03-26 03:57:48.440303 | orchestrator | ++ EXTERNAL_API=false
2026-03-26 03:57:48.440314 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-26 03:57:48.440324 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-26 03:57:48.440336 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-26 03:57:48.440347 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-26 03:57:48.440357 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-26 03:57:48.440368 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-26 03:57:48.440914 | orchestrator | ++ semver 9.5.0 8.0.0
2026-03-26 03:57:48.518210 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-26 03:57:48.518296 | orchestrator | + osism apply clusterapi
2026-03-26 03:57:50.877807 | orchestrator | 2026-03-26 03:57:50 | INFO  | Task 8d0762b2-3f92-45d4-a9d6-72554dc03cb3 (clusterapi) was prepared for execution.
2026-03-26 03:57:50.877892 | orchestrator | 2026-03-26 03:57:50 | INFO  | It takes a moment until task 8d0762b2-3f92-45d4-a9d6-72554dc03cb3 (clusterapi) has been started and output is visible here.
2026-03-26 03:58:47.097113 | orchestrator |
2026-03-26 03:58:47.097216 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-03-26 03:58:47.097227 | orchestrator |
2026-03-26 03:58:47.097234 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-03-26 03:58:47.097242 | orchestrator | Thursday 26 March 2026 03:57:55 +0000 (0:00:00.208) 0:00:00.208 ********
2026-03-26 03:58:47.097250 | orchestrator | included: cert_manager for testbed-manager
2026-03-26 03:58:47.097256 | orchestrator |
2026-03-26 03:58:47.097263 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-03-26 03:58:47.097269 | orchestrator | Thursday 26 March 2026 03:57:56 +0000 (0:00:00.262) 0:00:00.470 ********
2026-03-26 03:58:47.097276 | orchestrator | changed: [testbed-manager]
2026-03-26 03:58:47.097283 | orchestrator |
2026-03-26 03:58:47.097290 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-03-26 03:58:47.097296 | orchestrator | Thursday 26 March 2026 03:58:01 +0000 (0:00:05.815) 0:00:06.285 ********
2026-03-26 03:58:47.097302 | orchestrator | changed: [testbed-manager]
2026-03-26 03:58:47.097309 | orchestrator |
2026-03-26 03:58:47.097315 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-03-26 03:58:47.097321 | orchestrator |
2026-03-26 03:58:47.097328 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-03-26 03:58:47.097334 | orchestrator | Thursday 26 March 2026 03:58:26 +0000 (0:00:24.656) 0:00:30.942 ********
2026-03-26 03:58:47.097340 | orchestrator | ok: [testbed-manager]
2026-03-26 03:58:47.097347 | orchestrator |
2026-03-26 03:58:47.097354 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-03-26 03:58:47.097362 | orchestrator | Thursday 26 March 2026 03:58:27 +0000 (0:00:01.152) 0:00:32.094 ********
2026-03-26 03:58:47.097379 | orchestrator | ok: [testbed-manager]
2026-03-26 03:58:47.097383 | orchestrator |
2026-03-26 03:58:47.097387 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-03-26 03:58:47.097391 | orchestrator | Thursday 26 March 2026 03:58:27 +0000 (0:00:00.170) 0:00:32.265 ********
2026-03-26 03:58:47.097395 | orchestrator | ok: [testbed-manager]
2026-03-26 03:58:47.097399 | orchestrator |
2026-03-26 03:58:47.097403 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-03-26 03:58:47.097407 | orchestrator | Thursday 26 March 2026 03:58:44 +0000 (0:00:16.373) 0:00:48.639 ********
2026-03-26 03:58:47.097411 | orchestrator | skipping: [testbed-manager]
2026-03-26 03:58:47.097415 | orchestrator |
2026-03-26 03:58:47.097419 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-03-26 03:58:47.097423 | orchestrator | Thursday 26 March 2026 03:58:44 +0000 (0:00:00.169) 0:00:48.808 ********
2026-03-26 03:58:47.097426 | orchestrator | changed: [testbed-manager]
2026-03-26 03:58:47.097430 | orchestrator |
2026-03-26 03:58:47.097434 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 03:58:47.097439 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 03:58:47.097443 | orchestrator |
2026-03-26 03:58:47.097462 | orchestrator |
2026-03-26 03:58:47.097469 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 03:58:47.097483 | orchestrator | Thursday 26 March 2026 03:58:46 +0000 (0:00:02.321) 0:00:51.130 ********
2026-03-26 03:58:47.097490 | orchestrator | ===============================================================================
2026-03-26 03:58:47.097497 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 24.66s
2026-03-26 03:58:47.097526 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.37s
2026-03-26 03:58:47.097531 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.82s
2026-03-26 03:58:47.097534 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.32s
2026-03-26 03:58:47.097538 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.15s
2026-03-26 03:58:47.097542 | orchestrator | Include cert_manager role ----------------------------------------------- 0.26s
2026-03-26 03:58:47.097546 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s
2026-03-26 03:58:47.097550 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.17s
2026-03-26 03:58:47.470230 | orchestrator | + osism apply magnum
2026-03-26 03:58:49.592842 | orchestrator | 2026-03-26 03:58:49 | INFO  | Task 74b26c9e-e173-4941-b638-fff6166fc4d3 (magnum) was prepared for execution.
2026-03-26 03:58:49.592928 | orchestrator | 2026-03-26 03:58:49 | INFO  | It takes a moment until task 74b26c9e-e173-4941-b638-fff6166fc4d3 (magnum) has been started and output is visible here.
2026-03-26 03:59:33.703467 | orchestrator |
2026-03-26 03:59:33.703608 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 03:59:33.703628 | orchestrator |
2026-03-26 03:59:33.703641 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 03:59:33.703654 | orchestrator | Thursday 26 March 2026 03:58:54 +0000 (0:00:00.301) 0:00:00.301 ********
2026-03-26 03:59:33.703666 | orchestrator | ok: [testbed-node-0]
2026-03-26 03:59:33.703678 | orchestrator | ok: [testbed-node-1]
2026-03-26 03:59:33.703689 | orchestrator | ok: [testbed-node-2]
2026-03-26 03:59:33.703701 | orchestrator |
2026-03-26 03:59:33.703712 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 03:59:33.703723 | orchestrator | Thursday 26 March 2026 03:58:54 +0000 (0:00:00.336) 0:00:00.638 ********
2026-03-26 03:59:33.703773 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-03-26 03:59:33.703786 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-03-26 03:59:33.703797 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-03-26 03:59:33.703808 | orchestrator |
2026-03-26 03:59:33.703820 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-03-26 03:59:33.703831 | orchestrator |
2026-03-26 03:59:33.703842 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-26 03:59:33.703853 | orchestrator | Thursday 26 March 2026 03:58:55 +0000 (0:00:00.529) 0:00:01.167 ********
2026-03-26 03:59:33.703865 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 03:59:33.703877 | orchestrator |
2026-03-26 03:59:33.703888 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-03-26 03:59:33.703899 | orchestrator | Thursday 26 March 2026 03:58:55 +0000 (0:00:00.669) 0:00:01.837 ********
2026-03-26 03:59:33.703911 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-03-26 03:59:33.703922 | orchestrator |
2026-03-26 03:59:33.703933 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-03-26 03:59:33.703944 | orchestrator | Thursday 26 March 2026 03:58:59 +0000 (0:00:03.626) 0:00:05.464 ********
2026-03-26 03:59:33.703955 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-03-26 03:59:33.703967 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-03-26 03:59:33.703978 | orchestrator |
2026-03-26 03:59:33.703989 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-03-26 03:59:33.704000 | orchestrator | Thursday 26 March 2026 03:59:05 +0000 (0:00:06.472) 0:00:11.936 ********
2026-03-26 03:59:33.704012 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-26 03:59:33.704023 | orchestrator |
2026-03-26 03:59:33.704060 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-03-26 03:59:33.704086 | orchestrator | Thursday 26 March 2026 03:59:09 +0000 (0:00:03.545) 0:00:15.482 ********
2026-03-26 03:59:33.704098 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-26 03:59:33.704109 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-03-26 03:59:33.704119 | orchestrator |
2026-03-26 03:59:33.704130 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-03-26 03:59:33.704141 | orchestrator | Thursday 26 March 2026 03:59:13 +0000 (0:00:03.939) 0:00:19.422 ********
2026-03-26 03:59:33.704152 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-26 03:59:33.704163 | orchestrator |
2026-03-26 03:59:33.704174 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-03-26 03:59:33.704185 | orchestrator | Thursday 26 March 2026 03:59:16 +0000 (0:00:03.416) 0:00:22.838 ********
2026-03-26 03:59:33.704196 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-03-26 03:59:33.704207 | orchestrator |
2026-03-26 03:59:33.704218 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-03-26 03:59:33.704229 | orchestrator | Thursday 26 March 2026 03:59:21 +0000 (0:00:03.622) 0:00:27.210 ********
2026-03-26 03:59:33.704240 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:59:33.704251 | orchestrator |
2026-03-26 03:59:33.704262 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-03-26 03:59:33.704273 | orchestrator | Thursday 26 March 2026 03:59:24 +0000 (0:00:03.854) 0:00:30.832 ********
2026-03-26 03:59:33.704284 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:59:33.704295 | orchestrator |
2026-03-26 03:59:33.704306 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-03-26 03:59:33.704317 | orchestrator | Thursday 26 March 2026 03:59:28 +0000 (0:00:03.448) 0:00:34.687 ********
2026-03-26 03:59:33.704328 | orchestrator | changed: [testbed-node-0]
2026-03-26 03:59:33.704339 | orchestrator |
2026-03-26 03:59:33.704350 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-03-26 03:59:33.704361 | orchestrator | Thursday 26 March 2026 03:59:32 +0000 (0:00:03.448) 0:00:38.135 ********
2026-03-26 03:59:33.704407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-26 03:59:33.704424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-26 03:59:33.704451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-26 03:59:33.704463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:59:33.704476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:59:33.704495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-26 03:59:41.495406 | orchestrator |
2026-03-26 03:59:41.495512 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-03-26 03:59:41.495529 | orchestrator | Thursday 26 March 2026 03:59:33 +0000 (0:00:01.669) 0:00:39.805 ********
2026-03-26 03:59:41.495540 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:59:41.495552 | orchestrator |
2026-03-26 03:59:41.495562 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-03-26 03:59:41.495572 | orchestrator | Thursday 26 March 2026 03:59:33 +0000 (0:00:00.189) 0:00:39.995 ********
2026-03-26 03:59:41.495582 | orchestrator | skipping: [testbed-node-0]
2026-03-26 03:59:41.495592 | orchestrator | skipping: [testbed-node-1]
2026-03-26 03:59:41.495602 | orchestrator | skipping: [testbed-node-2]
2026-03-26 03:59:41.495633 | orchestrator |
2026-03-26 03:59:41.495644 | orchestrator |
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-26 03:59:41.495654 | orchestrator | Thursday 26 March 2026 03:59:34 +0000 (0:00:00.341) 0:00:40.336 ******** 2026-03-26 03:59:41.495663 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-26 03:59:41.495673 | orchestrator | 2026-03-26 03:59:41.495683 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-26 03:59:41.495693 | orchestrator | Thursday 26 March 2026 03:59:35 +0000 (0:00:00.911) 0:00:41.248 ******** 2026-03-26 03:59:41.495705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:41.495786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:41.495801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:41.495831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:41.495852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:41.495863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:41.495873 | orchestrator | 2026-03-26 03:59:41.495888 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-26 03:59:41.495898 
| orchestrator | Thursday 26 March 2026 03:59:37 +0000 (0:00:02.571) 0:00:43.819 ******** 2026-03-26 03:59:41.495908 | orchestrator | ok: [testbed-node-0] 2026-03-26 03:59:41.495919 | orchestrator | ok: [testbed-node-1] 2026-03-26 03:59:41.495929 | orchestrator | ok: [testbed-node-2] 2026-03-26 03:59:41.495940 | orchestrator | 2026-03-26 03:59:41.495952 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-26 03:59:41.495962 | orchestrator | Thursday 26 March 2026 03:59:38 +0000 (0:00:00.548) 0:00:44.367 ******** 2026-03-26 03:59:41.495975 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 03:59:41.495986 | orchestrator | 2026-03-26 03:59:41.495997 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-26 03:59:41.496007 | orchestrator | Thursday 26 March 2026 03:59:38 +0000 (0:00:00.582) 0:00:44.950 ******** 2026-03-26 03:59:41.496019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:41.496038 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:42.597198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:42.597357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:42.597384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:42.597402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:42.597419 | orchestrator | 2026-03-26 03:59:42.597437 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-26 03:59:42.597455 | orchestrator | Thursday 26 March 2026 03:59:41 +0000 (0:00:02.655) 0:00:47.605 ******** 2026-03-26 03:59:42.597522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:42.597542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:42.597559 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:59:42.597585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:42.597604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:42.597620 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:59:42.597636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:42.597675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:46.260526 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:59:46.260638 | orchestrator | 2026-03-26 
03:59:46.260655 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-26 03:59:46.260665 | orchestrator | Thursday 26 March 2026 03:59:42 +0000 (0:00:01.094) 0:00:48.699 ******** 2026-03-26 03:59:46.260674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:46.260699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:46.260706 | 
orchestrator | skipping: [testbed-node-0] 2026-03-26 03:59:46.260713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:46.260778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:46.260786 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:59:46.260807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:46.260813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:46.260819 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:59:46.260825 | orchestrator | 2026-03-26 03:59:46.260832 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-26 03:59:46.260842 | orchestrator | Thursday 26 March 2026 03:59:43 +0000 (0:00:00.934) 0:00:49.633 ******** 2026-03-26 03:59:46.260849 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:46.260856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:46.260873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:52.796359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:52.796518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:52.796547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:52.796608 | orchestrator | 2026-03-26 03:59:52.796631 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-26 03:59:52.796652 | orchestrator | Thursday 26 March 2026 03:59:46 +0000 (0:00:02.739) 0:00:52.373 ******** 2026-03-26 03:59:52.796670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:52.796713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:52.796809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:52.796839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:52.796860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:52.796892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:52.796912 | orchestrator | 2026-03-26 03:59:52.796931 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-26 03:59:52.796947 | orchestrator | Thursday 26 March 2026 03:59:51 +0000 (0:00:05.740) 0:00:58.113 ******** 2026-03-26 03:59:52.796977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:54.784494 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:54.784601 | orchestrator | skipping: [testbed-node-0] 2026-03-26 03:59:54.784634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:54.784671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:54.784683 | orchestrator | skipping: [testbed-node-1] 2026-03-26 03:59:54.784693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-26 03:59:54.784721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 03:59:54.784790 | orchestrator | skipping: [testbed-node-2] 2026-03-26 03:59:54.784800 | orchestrator | 2026-03-26 03:59:54.784810 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-26 03:59:54.784822 | orchestrator | Thursday 26 March 2026 03:59:52 +0000 (0:00:00.797) 0:00:58.910 ******** 2026-03-26 03:59:54.784841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:54.784862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:54.784872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-26 03:59:54.784882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 03:59:54.784902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-26 04:00:50.560531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-26 04:00:50.560694 | orchestrator | 2026-03-26 04:00:50.560726 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-26 04:00:50.560844 | orchestrator | Thursday 26 March 2026 03:59:54 +0000 (0:00:01.979) 0:01:00.889 ******** 2026-03-26 04:00:50.560862 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:00:50.560879 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:00:50.560896 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:00:50.560912 | orchestrator | 2026-03-26 04:00:50.560928 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-26 04:00:50.560945 | orchestrator | Thursday 26 March 2026 03:59:55 +0000 (0:00:00.754) 0:01:01.644 ******** 2026-03-26 04:00:50.560961 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:00:50.560977 | orchestrator | 2026-03-26 04:00:50.560993 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-26 04:00:50.561009 | orchestrator | Thursday 26 March 2026 03:59:57 +0000 (0:00:02.176) 0:01:03.820 ******** 2026-03-26 04:00:50.561026 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:00:50.561044 | orchestrator | 2026-03-26 04:00:50.561062 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-26 04:00:50.561080 | orchestrator | Thursday 26 March 2026 04:00:00 +0000 (0:00:02.444) 0:01:06.265 ******** 2026-03-26 04:00:50.561098 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:00:50.561115 | orchestrator | 2026-03-26 04:00:50.561132 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-26 04:00:50.561149 | orchestrator | Thursday 26 March 2026 04:00:17 +0000 (0:00:17.263) 0:01:23.529 ******** 2026-03-26 04:00:50.561167 | orchestrator | 2026-03-26 04:00:50.561185 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-03-26 04:00:50.561203 | orchestrator | Thursday 26 March 2026 04:00:17 +0000 (0:00:00.089) 0:01:23.618 ******** 2026-03-26 04:00:50.561218 | orchestrator | 2026-03-26 04:00:50.561229 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-26 04:00:50.561241 | orchestrator | Thursday 26 March 2026 04:00:17 +0000 (0:00:00.076) 0:01:23.695 ******** 2026-03-26 04:00:50.561252 | orchestrator | 2026-03-26 04:00:50.561263 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-03-26 04:00:50.561274 | orchestrator | Thursday 26 March 2026 04:00:17 +0000 (0:00:00.075) 0:01:23.771 ******** 2026-03-26 04:00:50.561285 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:00:50.561296 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:00:50.561307 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:00:50.561318 | orchestrator | 2026-03-26 04:00:50.561330 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-26 04:00:50.561341 | orchestrator | Thursday 26 March 2026 04:00:38 +0000 (0:00:20.763) 0:01:44.534 ******** 2026-03-26 04:00:50.561352 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:00:50.561364 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:00:50.561375 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:00:50.561385 | orchestrator | 2026-03-26 04:00:50.561395 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:00:50.561406 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-26 04:00:50.561417 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-26 04:00:50.561427 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-03-26 04:00:50.561437 | orchestrator | 2026-03-26 04:00:50.561447 | orchestrator | 2026-03-26 04:00:50.561457 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:00:50.561478 | orchestrator | Thursday 26 March 2026 04:00:50 +0000 (0:00:11.676) 0:01:56.211 ******** 2026-03-26 04:00:50.561488 | orchestrator | =============================================================================== 2026-03-26 04:00:50.561503 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.76s 2026-03-26 04:00:50.561525 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.26s 2026-03-26 04:00:50.561547 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.68s 2026-03-26 04:00:50.561565 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.47s 2026-03-26 04:00:50.561581 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.74s 2026-03-26 04:00:50.561597 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.37s 2026-03-26 04:00:50.561612 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.94s 2026-03-26 04:00:50.561650 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.85s 2026-03-26 04:00:50.561669 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.63s 2026-03-26 04:00:50.561685 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.62s 2026-03-26 04:00:50.561700 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.55s 2026-03-26 04:00:50.561718 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.45s 2026-03-26 04:00:50.561761 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.42s 2026-03-26 04:00:50.561777 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.74s 2026-03-26 04:00:50.561787 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.66s 2026-03-26 04:00:50.561806 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.57s 2026-03-26 04:00:50.561816 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.44s 2026-03-26 04:00:50.561826 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.18s 2026-03-26 04:00:50.561835 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.98s 2026-03-26 04:00:50.561845 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.67s 2026-03-26 04:00:51.359278 | orchestrator | ok: Runtime: 1:45:28.286456 2026-03-26 04:00:51.613091 | 2026-03-26 04:00:51.613257 | TASK [Deploy in a nutshell] 2026-03-26 04:00:52.147693 | orchestrator | skipping: Conditional result was False 2026-03-26 04:00:52.172762 | 2026-03-26 04:00:52.172922 | TASK [Bootstrap services] 2026-03-26 04:00:52.927651 | orchestrator | 2026-03-26 04:00:52.927894 | orchestrator | # BOOTSTRAP 2026-03-26 04:00:52.927918 | orchestrator | 2026-03-26 04:00:52.927933 | orchestrator | + set -e 2026-03-26 04:00:52.927946 | orchestrator | + echo 2026-03-26 04:00:52.927960 | orchestrator | + echo '# BOOTSTRAP' 2026-03-26 04:00:52.927978 | orchestrator | + echo 2026-03-26 04:00:52.928023 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-26 04:00:52.939553 | orchestrator | + set -e 2026-03-26 04:00:52.939652 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-26 04:00:55.462346 | orchestrator | 2026-03-26 04:00:55 | INFO  | It takes a 
moment until task 31b0c09a-7ac3-467c-be7c-a759484498ca (flavor-manager) has been started and output is visible here. 2026-03-26 04:01:04.140008 | orchestrator | 2026-03-26 04:00:58 | INFO  | Flavor SCS-1L-1 created 2026-03-26 04:01:04.140184 | orchestrator | 2026-03-26 04:00:59 | INFO  | Flavor SCS-1L-1-5 created 2026-03-26 04:01:04.140218 | orchestrator | 2026-03-26 04:00:59 | INFO  | Flavor SCS-1V-2 created 2026-03-26 04:01:04.140242 | orchestrator | 2026-03-26 04:00:59 | INFO  | Flavor SCS-1V-2-5 created 2026-03-26 04:01:04.140266 | orchestrator | 2026-03-26 04:00:59 | INFO  | Flavor SCS-1V-4 created 2026-03-26 04:01:04.140291 | orchestrator | 2026-03-26 04:00:59 | INFO  | Flavor SCS-1V-4-10 created 2026-03-26 04:01:04.140315 | orchestrator | 2026-03-26 04:01:00 | INFO  | Flavor SCS-1V-8 created 2026-03-26 04:01:04.140340 | orchestrator | 2026-03-26 04:01:00 | INFO  | Flavor SCS-1V-8-20 created 2026-03-26 04:01:04.140381 | orchestrator | 2026-03-26 04:01:00 | INFO  | Flavor SCS-2V-4 created 2026-03-26 04:01:04.140405 | orchestrator | 2026-03-26 04:01:00 | INFO  | Flavor SCS-2V-4-10 created 2026-03-26 04:01:04.140430 | orchestrator | 2026-03-26 04:01:00 | INFO  | Flavor SCS-2V-8 created 2026-03-26 04:01:04.140453 | orchestrator | 2026-03-26 04:01:00 | INFO  | Flavor SCS-2V-8-20 created 2026-03-26 04:01:04.140477 | orchestrator | 2026-03-26 04:01:01 | INFO  | Flavor SCS-2V-16 created 2026-03-26 04:01:04.140501 | orchestrator | 2026-03-26 04:01:01 | INFO  | Flavor SCS-2V-16-50 created 2026-03-26 04:01:04.140524 | orchestrator | 2026-03-26 04:01:01 | INFO  | Flavor SCS-4V-8 created 2026-03-26 04:01:04.140546 | orchestrator | 2026-03-26 04:01:01 | INFO  | Flavor SCS-4V-8-20 created 2026-03-26 04:01:04.140565 | orchestrator | 2026-03-26 04:01:01 | INFO  | Flavor SCS-4V-16 created 2026-03-26 04:01:04.140583 | orchestrator | 2026-03-26 04:01:01 | INFO  | Flavor SCS-4V-16-50 created 2026-03-26 04:01:04.140606 | orchestrator | 2026-03-26 04:01:02 | INFO  | Flavor 
SCS-4V-32 created 2026-03-26 04:01:04.140667 | orchestrator | 2026-03-26 04:01:02 | INFO  | Flavor SCS-4V-32-100 created 2026-03-26 04:01:04.140692 | orchestrator | 2026-03-26 04:01:02 | INFO  | Flavor SCS-8V-16 created 2026-03-26 04:01:04.140716 | orchestrator | 2026-03-26 04:01:02 | INFO  | Flavor SCS-8V-16-50 created 2026-03-26 04:01:04.140808 | orchestrator | 2026-03-26 04:01:02 | INFO  | Flavor SCS-8V-32 created 2026-03-26 04:01:04.140828 | orchestrator | 2026-03-26 04:01:02 | INFO  | Flavor SCS-8V-32-100 created 2026-03-26 04:01:04.140848 | orchestrator | 2026-03-26 04:01:03 | INFO  | Flavor SCS-16V-32 created 2026-03-26 04:01:04.140867 | orchestrator | 2026-03-26 04:01:03 | INFO  | Flavor SCS-16V-32-100 created 2026-03-26 04:01:04.140888 | orchestrator | 2026-03-26 04:01:03 | INFO  | Flavor SCS-2V-4-20s created 2026-03-26 04:01:04.140901 | orchestrator | 2026-03-26 04:01:03 | INFO  | Flavor SCS-4V-8-50s created 2026-03-26 04:01:04.140912 | orchestrator | 2026-03-26 04:01:03 | INFO  | Flavor SCS-8V-32-100s created 2026-03-26 04:01:06.734503 | orchestrator | 2026-03-26 04:01:06 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-26 04:01:16.888989 | orchestrator | 2026-03-26 04:01:16 | INFO  | Task 23a49ab9-b06a-451c-9053-ebcf8ce10483 (bootstrap-basic) was prepared for execution. 2026-03-26 04:01:16.889118 | orchestrator | 2026-03-26 04:01:16 | INFO  | It takes a moment until task 23a49ab9-b06a-451c-9053-ebcf8ce10483 (bootstrap-basic) has been started and output is visible here. 
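The container definitions earlier in this log configure kolla healthchecks such as `healthcheck_port magnum-conductor 5672` and `healthcheck_curl http://192.168.16.10:9511`. As a rough stand-in for the port variant (a simplification: kolla's real `healthcheck_port` script inspects the container's own sockets rather than dialing from outside), the check can be sketched as:

```python
import socket


def healthcheck_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Approximation of a TCP-port healthcheck: succeed if a connection
    to host:port can be established within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The `'interval': '30'`, `'retries': '3'` values in the container definitions above correspond to how often Docker re-runs such a test and how many consecutive failures mark the container unhealthy.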
2026-03-26 04:02:02.744246 | orchestrator | 2026-03-26 04:02:02.744372 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-26 04:02:02.744396 | orchestrator | 2026-03-26 04:02:02.744415 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 04:02:02.744435 | orchestrator | Thursday 26 March 2026 04:01:22 +0000 (0:00:00.092) 0:00:00.092 ******** 2026-03-26 04:02:02.744471 | orchestrator | ok: [localhost] 2026-03-26 04:02:02.744493 | orchestrator | 2026-03-26 04:02:02.744509 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-26 04:02:02.744521 | orchestrator | Thursday 26 March 2026 04:01:24 +0000 (0:00:02.112) 0:00:02.205 ******** 2026-03-26 04:02:02.744532 | orchestrator | ok: [localhost] 2026-03-26 04:02:02.744543 | orchestrator | 2026-03-26 04:02:02.744554 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-26 04:02:02.744565 | orchestrator | Thursday 26 March 2026 04:01:32 +0000 (0:00:07.687) 0:00:09.892 ******** 2026-03-26 04:02:02.744577 | orchestrator | changed: [localhost] 2026-03-26 04:02:02.744588 | orchestrator | 2026-03-26 04:02:02.744599 | orchestrator | TASK [Create public network] *************************************************** 2026-03-26 04:02:02.744611 | orchestrator | Thursday 26 March 2026 04:01:38 +0000 (0:00:06.342) 0:00:16.234 ******** 2026-03-26 04:02:02.744622 | orchestrator | changed: [localhost] 2026-03-26 04:02:02.744633 | orchestrator | 2026-03-26 04:02:02.744644 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-26 04:02:02.744655 | orchestrator | Thursday 26 March 2026 04:01:44 +0000 (0:00:05.625) 0:00:21.860 ******** 2026-03-26 04:02:02.744670 | orchestrator | changed: [localhost] 2026-03-26 04:02:02.744681 | orchestrator | 2026-03-26 04:02:02.744693 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-26 04:02:02.744704 | orchestrator | Thursday 26 March 2026 04:01:50 +0000 (0:00:06.518) 0:00:28.378 ******** 2026-03-26 04:02:02.744715 | orchestrator | changed: [localhost] 2026-03-26 04:02:02.744726 | orchestrator | 2026-03-26 04:02:02.744785 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-26 04:02:02.744799 | orchestrator | Thursday 26 March 2026 04:01:54 +0000 (0:00:04.320) 0:00:32.699 ******** 2026-03-26 04:02:02.744812 | orchestrator | changed: [localhost] 2026-03-26 04:02:02.744824 | orchestrator | 2026-03-26 04:02:02.744838 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-26 04:02:02.744861 | orchestrator | Thursday 26 March 2026 04:01:58 +0000 (0:00:03.967) 0:00:36.667 ******** 2026-03-26 04:02:02.744875 | orchestrator | ok: [localhost] 2026-03-26 04:02:02.744888 | orchestrator | 2026-03-26 04:02:02.744901 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:02:02.744914 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-26 04:02:02.744929 | orchestrator | 2026-03-26 04:02:02.744941 | orchestrator | 2026-03-26 04:02:02.744954 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:02:02.744967 | orchestrator | Thursday 26 March 2026 04:02:02 +0000 (0:00:03.540) 0:00:40.207 ******** 2026-03-26 04:02:02.744981 | orchestrator | =============================================================================== 2026-03-26 04:02:02.744995 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.69s 2026-03-26 04:02:02.745008 | orchestrator | Set public network to default ------------------------------------------- 6.52s 2026-03-26 04:02:02.745021 | 
orchestrator | Create volume type LUKS ------------------------------------------------- 6.34s 2026-03-26 04:02:02.745034 | orchestrator | Create public network --------------------------------------------------- 5.63s 2026-03-26 04:02:02.745070 | orchestrator | Create public subnet ---------------------------------------------------- 4.32s 2026-03-26 04:02:02.745084 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.97s 2026-03-26 04:02:02.745097 | orchestrator | Create manager role ----------------------------------------------------- 3.54s 2026-03-26 04:02:02.745110 | orchestrator | Gathering Facts --------------------------------------------------------- 2.11s 2026-03-26 04:02:05.119106 | orchestrator | 2026-03-26 04:02:05 | INFO  | It takes a moment until task b74b2da2-3ef3-4167-9a51-4ffcf0b7b791 (image-manager) has been started and output is visible here. 2026-03-26 04:02:48.496142 | orchestrator | 2026-03-26 04:02:07 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-26 04:02:48.496262 | orchestrator | 2026-03-26 04:02:08 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-26 04:02:48.496279 | orchestrator | 2026-03-26 04:02:08 | INFO  | Importing image Cirros 0.6.2 2026-03-26 04:02:48.496292 | orchestrator | 2026-03-26 04:02:08 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-26 04:02:48.496305 | orchestrator | 2026-03-26 04:02:10 | INFO  | Waiting for image to leave queued state... 2026-03-26 04:02:48.496317 | orchestrator | 2026-03-26 04:02:12 | INFO  | Waiting for import to complete... 
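Both plays above finish with a `PLAY RECAP` block, and gating a job like this one usually only needs the `failed`/`unreachable` counters from those lines. A minimal parser (a hypothetical helper, not part of the OSISM tooling) might look like:

```python
import re

# Matches e.g. "testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 ..."
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")


def parse_recap_line(line):
    """Return (host, {counter: value}) for a recap line, or None."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counts = {k: int(v) for k, v in
              (pair.split("=") for pair in m.group("counts").split())}
    return m.group("host"), counts


def play_succeeded(recap_lines):
    """A play is healthy when no host reports failures or unreachability."""
    for line in recap_lines:
        parsed = parse_recap_line(line)
        if parsed and (parsed[1].get("failed", 0) or parsed[1].get("unreachable", 0)):
            return False
    return True
```

Applied to the magnum recap above, all three testbed nodes report `failed=0 unreachable=0`, so the play counts as successful.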
2026-03-26 04:02:48.496328 | orchestrator | 2026-03-26 04:02:22 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-26 04:02:48.496340 | orchestrator | 2026-03-26 04:02:23 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-26 04:02:48.496351 | orchestrator | 2026-03-26 04:02:23 | INFO  | Setting internal_version = 0.6.2 2026-03-26 04:02:48.496363 | orchestrator | 2026-03-26 04:02:23 | INFO  | Setting image_original_user = cirros 2026-03-26 04:02:48.496374 | orchestrator | 2026-03-26 04:02:23 | INFO  | Adding tag os:cirros 2026-03-26 04:02:48.496385 | orchestrator | 2026-03-26 04:02:23 | INFO  | Setting property architecture: x86_64 2026-03-26 04:02:48.496396 | orchestrator | 2026-03-26 04:02:23 | INFO  | Setting property hw_disk_bus: scsi 2026-03-26 04:02:48.496407 | orchestrator | 2026-03-26 04:02:23 | INFO  | Setting property hw_rng_model: virtio 2026-03-26 04:02:48.496418 | orchestrator | 2026-03-26 04:02:24 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-26 04:02:48.496429 | orchestrator | 2026-03-26 04:02:24 | INFO  | Setting property hw_watchdog_action: reset 2026-03-26 04:02:48.496440 | orchestrator | 2026-03-26 04:02:24 | INFO  | Setting property hypervisor_type: qemu 2026-03-26 04:02:48.496452 | orchestrator | 2026-03-26 04:02:25 | INFO  | Setting property os_distro: cirros 2026-03-26 04:02:48.496463 | orchestrator | 2026-03-26 04:02:25 | INFO  | Setting property os_purpose: minimal 2026-03-26 04:02:48.496474 | orchestrator | 2026-03-26 04:02:25 | INFO  | Setting property replace_frequency: never 2026-03-26 04:02:48.496485 | orchestrator | 2026-03-26 04:02:25 | INFO  | Setting property uuid_validity: none 2026-03-26 04:02:48.496495 | orchestrator | 2026-03-26 04:02:26 | INFO  | Setting property provided_until: none 2026-03-26 04:02:48.496506 | orchestrator | 2026-03-26 04:02:26 | INFO  | Setting property image_description: Cirros 2026-03-26 04:02:48.496518 | orchestrator | 2026-03-26 04:02:26 | INFO  | 
Setting property image_name: Cirros 2026-03-26 04:02:48.496529 | orchestrator | 2026-03-26 04:02:26 | INFO  | Setting property internal_version: 0.6.2 2026-03-26 04:02:48.496540 | orchestrator | 2026-03-26 04:02:27 | INFO  | Setting property image_original_user: cirros 2026-03-26 04:02:48.496571 | orchestrator | 2026-03-26 04:02:27 | INFO  | Setting property os_version: 0.6.2 2026-03-26 04:02:48.496592 | orchestrator | 2026-03-26 04:02:27 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-26 04:02:48.496604 | orchestrator | 2026-03-26 04:02:28 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-26 04:02:48.496615 | orchestrator | 2026-03-26 04:02:28 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-26 04:02:48.496626 | orchestrator | 2026-03-26 04:02:28 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-26 04:02:48.496637 | orchestrator | 2026-03-26 04:02:28 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-26 04:02:48.496648 | orchestrator | 2026-03-26 04:02:28 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-26 04:02:48.496664 | orchestrator | 2026-03-26 04:02:28 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-26 04:02:48.496678 | orchestrator | 2026-03-26 04:02:28 | INFO  | Importing image Cirros 0.6.3 2026-03-26 04:02:48.496691 | orchestrator | 2026-03-26 04:02:28 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-26 04:02:48.496705 | orchestrator | 2026-03-26 04:02:29 | INFO  | Waiting for image to leave queued state... 2026-03-26 04:02:48.496718 | orchestrator | 2026-03-26 04:02:31 | INFO  | Waiting for import to complete... 
2026-03-26 04:02:48.496789 | orchestrator | 2026-03-26 04:02:41 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-26 04:02:48.496805 | orchestrator | 2026-03-26 04:02:42 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-26 04:02:48.496818 | orchestrator | 2026-03-26 04:02:42 | INFO  | Setting internal_version = 0.6.3 2026-03-26 04:02:48.496830 | orchestrator | 2026-03-26 04:02:42 | INFO  | Setting image_original_user = cirros 2026-03-26 04:02:48.496843 | orchestrator | 2026-03-26 04:02:42 | INFO  | Adding tag os:cirros 2026-03-26 04:02:48.496856 | orchestrator | 2026-03-26 04:02:42 | INFO  | Setting property architecture: x86_64 2026-03-26 04:02:48.496868 | orchestrator | 2026-03-26 04:02:42 | INFO  | Setting property hw_disk_bus: scsi 2026-03-26 04:02:48.496881 | orchestrator | 2026-03-26 04:02:42 | INFO  | Setting property hw_rng_model: virtio 2026-03-26 04:02:48.496893 | orchestrator | 2026-03-26 04:02:43 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-26 04:02:48.496906 | orchestrator | 2026-03-26 04:02:43 | INFO  | Setting property hw_watchdog_action: reset 2026-03-26 04:02:48.496919 | orchestrator | 2026-03-26 04:02:43 | INFO  | Setting property hypervisor_type: qemu 2026-03-26 04:02:48.496932 | orchestrator | 2026-03-26 04:02:43 | INFO  | Setting property os_distro: cirros 2026-03-26 04:02:48.496945 | orchestrator | 2026-03-26 04:02:44 | INFO  | Setting property os_purpose: minimal 2026-03-26 04:02:48.496958 | orchestrator | 2026-03-26 04:02:44 | INFO  | Setting property replace_frequency: never 2026-03-26 04:02:48.496972 | orchestrator | 2026-03-26 04:02:44 | INFO  | Setting property uuid_validity: none 2026-03-26 04:02:48.496984 | orchestrator | 2026-03-26 04:02:44 | INFO  | Setting property provided_until: none 2026-03-26 04:02:48.496998 | orchestrator | 2026-03-26 04:02:45 | INFO  | Setting property image_description: Cirros 2026-03-26 04:02:48.497011 | orchestrator | 2026-03-26 04:02:45 | INFO  | 
Setting property image_name: Cirros 2026-03-26 04:02:48.497023 | orchestrator | 2026-03-26 04:02:45 | INFO  | Setting property internal_version: 0.6.3 2026-03-26 04:02:48.497043 | orchestrator | 2026-03-26 04:02:46 | INFO  | Setting property image_original_user: cirros 2026-03-26 04:02:48.497054 | orchestrator | 2026-03-26 04:02:46 | INFO  | Setting property os_version: 0.6.3 2026-03-26 04:02:48.497065 | orchestrator | 2026-03-26 04:02:46 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-26 04:02:48.497076 | orchestrator | 2026-03-26 04:02:47 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-26 04:02:48.497087 | orchestrator | 2026-03-26 04:02:47 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-26 04:02:48.497097 | orchestrator | 2026-03-26 04:02:47 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-26 04:02:48.497108 | orchestrator | 2026-03-26 04:02:47 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-26 04:02:48.809695 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-26 04:02:51.153403 | orchestrator | 2026-03-26 04:02:51 | INFO  | date: 2026-03-26 2026-03-26 04:02:51.153502 | orchestrator | 2026-03-26 04:02:51 | INFO  | image: octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-26 04:02:51.153539 | orchestrator | 2026-03-26 04:02:51 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-26 04:02:51.153555 | orchestrator | 2026-03-26 04:02:51 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2.CHECKSUM 2026-03-26 04:02:51.310524 | orchestrator | 2026-03-26 04:02:51 | INFO  | checksum: 95322c96815879973a320f11cf6c9ad6237f8791183c852e0c8319e08839b1ac 2026-03-26 04:02:51.378352 | orchestrator | 
2026-03-26 04:02:51 | INFO  | It takes a moment until task 025d7cac-e919-43a4-aaa9-75a2a23da910 (image-manager) has been started and output is visible here. 2026-03-26 04:04:04.987146 | orchestrator | 2026-03-26 04:02:53 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-03-26' 2026-03-26 04:04:04.987265 | orchestrator | 2026-03-26 04:02:53 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2: 200 2026-03-26 04:04:04.987284 | orchestrator | 2026-03-26 04:02:53 | INFO  | Importing image OpenStack Octavia Amphora 2026-03-26 2026-03-26 04:04:04.987297 | orchestrator | 2026-03-26 04:02:53 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-26 04:04:04.987310 | orchestrator | 2026-03-26 04:02:55 | INFO  | Waiting for image to leave queued state... 2026-03-26 04:04:04.987321 | orchestrator | 2026-03-26 04:02:57 | INFO  | Waiting for import to complete... 2026-03-26 04:04:04.987332 | orchestrator | 2026-03-26 04:03:07 | INFO  | Waiting for import to complete... 2026-03-26 04:04:04.987343 | orchestrator | 2026-03-26 04:03:17 | INFO  | Waiting for import to complete... 2026-03-26 04:04:04.987354 | orchestrator | 2026-03-26 04:03:27 | INFO  | Waiting for import to complete... 2026-03-26 04:04:04.987367 | orchestrator | 2026-03-26 04:03:38 | INFO  | Waiting for import to complete... 2026-03-26 04:04:04.987379 | orchestrator | 2026-03-26 04:03:48 | INFO  | Waiting for import to complete... 
2026-03-26 04:04:04.987391 | orchestrator | 2026-03-26 04:03:58 | INFO  | Import of 'OpenStack Octavia Amphora 2026-03-26' successfully completed, reloading images 2026-03-26 04:04:04.987402 | orchestrator | 2026-03-26 04:03:58 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-03-26' 2026-03-26 04:04:04.987437 | orchestrator | 2026-03-26 04:03:58 | INFO  | Setting internal_version = 2026-03-26 2026-03-26 04:04:04.987448 | orchestrator | 2026-03-26 04:03:58 | INFO  | Setting image_original_user = ubuntu 2026-03-26 04:04:04.987460 | orchestrator | 2026-03-26 04:03:58 | INFO  | Adding tag amphora 2026-03-26 04:04:04.987471 | orchestrator | 2026-03-26 04:03:59 | INFO  | Adding tag os:ubuntu 2026-03-26 04:04:04.987482 | orchestrator | 2026-03-26 04:03:59 | INFO  | Setting property architecture: x86_64 2026-03-26 04:04:04.987492 | orchestrator | 2026-03-26 04:03:59 | INFO  | Setting property hw_disk_bus: scsi 2026-03-26 04:04:04.987503 | orchestrator | 2026-03-26 04:03:59 | INFO  | Setting property hw_rng_model: virtio 2026-03-26 04:04:04.987514 | orchestrator | 2026-03-26 04:04:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-26 04:04:04.987525 | orchestrator | 2026-03-26 04:04:00 | INFO  | Setting property hw_watchdog_action: reset 2026-03-26 04:04:04.987536 | orchestrator | 2026-03-26 04:04:00 | INFO  | Setting property hypervisor_type: qemu 2026-03-26 04:04:04.987547 | orchestrator | 2026-03-26 04:04:00 | INFO  | Setting property os_distro: ubuntu 2026-03-26 04:04:04.987557 | orchestrator | 2026-03-26 04:04:01 | INFO  | Setting property replace_frequency: quarterly 2026-03-26 04:04:04.987568 | orchestrator | 2026-03-26 04:04:01 | INFO  | Setting property uuid_validity: last-1 2026-03-26 04:04:04.987579 | orchestrator | 2026-03-26 04:04:02 | INFO  | Setting property provided_until: none 2026-03-26 04:04:04.987590 | orchestrator | 2026-03-26 04:04:02 | INFO  | Setting property os_purpose: network 2026-03-26 04:04:04.987615 | orchestrator 
| 2026-03-26 04:04:02 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-03-26 04:04:04.987627 | orchestrator | 2026-03-26 04:04:02 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-03-26 04:04:04.987638 | orchestrator | 2026-03-26 04:04:03 | INFO  | Setting property internal_version: 2026-03-26 2026-03-26 04:04:04.987649 | orchestrator | 2026-03-26 04:04:03 | INFO  | Setting property image_original_user: ubuntu 2026-03-26 04:04:04.987660 | orchestrator | 2026-03-26 04:04:03 | INFO  | Setting property os_version: 2026-03-26 2026-03-26 04:04:04.987671 | orchestrator | 2026-03-26 04:04:03 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260326.qcow2 2026-03-26 04:04:04.987682 | orchestrator | 2026-03-26 04:04:04 | INFO  | Setting property image_build_date: 2026-03-26 2026-03-26 04:04:04.987695 | orchestrator | 2026-03-26 04:04:04 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-03-26' 2026-03-26 04:04:04.987708 | orchestrator | 2026-03-26 04:04:04 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-03-26' 2026-03-26 04:04:04.987767 | orchestrator | 2026-03-26 04:04:04 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-03-26 04:04:04.987782 | orchestrator | 2026-03-26 04:04:04 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-03-26 04:04:04.987796 | orchestrator | 2026-03-26 04:04:04 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-03-26 04:04:04.987809 | orchestrator | 2026-03-26 04:04:04 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-03-26 04:04:05.373589 | orchestrator | ok: Runtime: 0:03:12.839139 2026-03-26 04:04:05.384064 | 2026-03-26 04:04:05.384271 | TASK [Run checks] 2026-03-26 04:04:06.126731 | orchestrator | + set -e 2026-03-26 04:04:06.126960 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-03-26 04:04:06.126986 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 04:04:06.127008 | orchestrator | ++ INTERACTIVE=false 2026-03-26 04:04:06.127022 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 04:04:06.127035 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 04:04:06.127050 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-26 04:04:06.127916 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-26 04:04:06.134557 | orchestrator | 2026-03-26 04:04:06.134661 | orchestrator | # CHECK 2026-03-26 04:04:06.134678 | orchestrator | 2026-03-26 04:04:06.134691 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 04:04:06.134709 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 04:04:06.134721 | orchestrator | + echo 2026-03-26 04:04:06.134759 | orchestrator | + echo '# CHECK' 2026-03-26 04:04:06.134772 | orchestrator | + echo 2026-03-26 04:04:06.134787 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-26 04:04:06.135425 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-26 04:04:06.193699 | orchestrator | 2026-03-26 04:04:06.193834 | orchestrator | ## Containers @ testbed-manager 2026-03-26 04:04:06.193851 | orchestrator | 2026-03-26 04:04:06.193865 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-26 04:04:06.193877 | orchestrator | + echo 2026-03-26 04:04:06.193889 | orchestrator | + echo '## Containers @ testbed-manager' 2026-03-26 04:04:06.193901 | orchestrator | + echo 2026-03-26 04:04:06.193913 | orchestrator | + osism container testbed-manager ps 2026-03-26 04:04:08.139188 | orchestrator | 2026-03-26 04:04:08 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-03-26 04:04:08.495130 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-26 04:04:08.495258 | orchestrator | a6c0ff0cf266 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-03-26 04:04:08.495285 | orchestrator | 9950518a1d95 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-03-26 04:04:08.495306 | orchestrator | 2d772ec2381d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-26 04:04:08.495319 | orchestrator | 3db09ddd1147 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-26 04:04:08.495330 | orchestrator | 9171c7f54c01 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-03-26 04:04:08.495347 | orchestrator | 42b99ccdba78 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up 59 minutes cephclient 2026-03-26 04:04:08.495359 | orchestrator | 14fa8b69c69f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-26 04:04:08.495371 | orchestrator | e7d025959a4d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-26 04:04:08.495408 | orchestrator | 76b296741eb4 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-26 04:04:08.495421 | orchestrator | 7e42ae5d1d9f registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-03-26 04:04:08.495432 | orchestrator | 9c516885849a phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-03-26 04:04:08.495444 | 
orchestrator | 218b44791923 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-03-26 04:04:08.495456 | orchestrator | 26eeba89e7d3 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-03-26 04:04:08.495468 | orchestrator | 01f240bfb490 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-03-26 04:04:08.495504 | orchestrator | 46bc210ef4f1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-03-26 04:04:08.495517 | orchestrator | e5947914229b registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-03-26 04:04:08.495529 | orchestrator | 80f4f4ccb433 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-03-26 04:04:08.495540 | orchestrator | 4cce80abddf5 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-03-26 04:04:08.495552 | orchestrator | 383c6d4109e1 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-03-26 04:04:08.495563 | orchestrator | 73c5df6619ee registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-03-26 04:04:08.495574 | orchestrator | a1385669c48e registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-03-26 04:04:08.495586 | orchestrator | aa27c19ce640 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-03-26 
04:04:08.495605 | orchestrator | 319e06da1161 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-03-26 04:04:08.495617 | orchestrator | 0084ba6d7452 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-03-26 04:04:08.495628 | orchestrator | 61ee0d48fd3c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-03-26 04:04:08.495640 | orchestrator | c5005444125f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-03-26 04:04:08.495651 | orchestrator | fa875137e67a registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-03-26 04:04:08.495662 | orchestrator | 4bd574e40d14 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-03-26 04:04:08.495679 | orchestrator | ee921f5b5733 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-03-26 04:04:08.495691 | orchestrator | 9ba1921448df registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-03-26 04:04:08.803823 | orchestrator | 2026-03-26 04:04:08.803929 | orchestrator | ## Images @ testbed-manager 2026-03-26 04:04:08.803946 | orchestrator | 2026-03-26 04:04:08.803959 | orchestrator | + echo 2026-03-26 04:04:08.803971 | orchestrator | + echo '## Images @ testbed-manager' 2026-03-26 04:04:08.803983 | orchestrator | + echo 2026-03-26 04:04:08.803999 | orchestrator | + osism container testbed-manager images 2026-03-26 04:04:11.103175 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-26 04:04:11.103293 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 1806a5037062 24 hours ago 239MB 2026-03-26 04:04:11.103310 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 8 weeks ago 41.4MB 2026-03-26 04:04:11.103321 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 3 months ago 11.5MB 2026-03-26 04:04:11.103332 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 3 months ago 608MB 2026-03-26 04:04:11.103347 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-26 04:04:11.103358 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-26 04:04:11.103369 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-26 04:04:11.103380 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 3 months ago 308MB 2026-03-26 04:04:11.103391 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-26 04:04:11.103434 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 3 months ago 404MB 2026-03-26 04:04:11.103446 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 3 months ago 839MB 2026-03-26 04:04:11.103457 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-26 04:04:11.103468 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 3 months ago 330MB 2026-03-26 04:04:11.103479 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 3 months ago 613MB 2026-03-26 04:04:11.103490 | orchestrator | 
registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 3 months ago 560MB 2026-03-26 04:04:11.103502 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 3 months ago 1.23GB 2026-03-26 04:04:11.103512 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 3 months ago 383MB 2026-03-26 04:04:11.103523 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 3 months ago 238MB 2026-03-26 04:04:11.103534 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 4 months ago 334MB 2026-03-26 04:04:11.103545 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 5 months ago 742MB 2026-03-26 04:04:11.103556 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-03-26 04:04:11.103567 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-03-26 04:04:11.103578 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 10 months ago 453MB 2026-03-26 04:04:11.103590 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 21 months ago 146MB 2026-03-26 04:04:11.103601 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-03-26 04:04:11.417374 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-26 04:04:11.417826 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-26 04:04:11.482541 | orchestrator | 2026-03-26 04:04:11.482640 | orchestrator | ## Containers @ testbed-node-0 2026-03-26 04:04:11.482654 | orchestrator | 2026-03-26 04:04:11.482664 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-26 04:04:11.482675 | orchestrator | + echo 2026-03-26 04:04:11.482686 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-03-26 04:04:11.482697 | orchestrator | + echo 2026-03-26 04:04:11.482707 | orchestrator | + osism container testbed-node-0 ps 
2026-03-26 04:04:13.977398 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-26 04:04:13.977506 | orchestrator | 10e685d017e5 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-26 04:04:13.977528 | orchestrator | 1e8d61047d06 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-26 04:04:13.977551 | orchestrator | db6b2a95a9bc registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-03-26 04:04:13.977571 | orchestrator | 6542f78de3af registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-26 04:04:13.977618 | orchestrator | 467c18e96c3a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-26 04:04:13.977641 | orchestrator | 137fab20c985 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-03-26 04:04:13.977670 | orchestrator | 937717e9414d registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-26 04:04:13.977693 | orchestrator | 7eb22a81cd5e registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-26 04:04:13.977713 | orchestrator | f3fd6a1d9bdd registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-26 04:04:13.977761 | orchestrator | 7f043dc613c0 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-03-26 04:04:13.977774 | orchestrator | 7d2ea26d8a03 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-26 04:04:13.977785 | orchestrator | b1dd7de43631 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-26 04:04:13.977797 | orchestrator | 122590345af5 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-26 04:04:13.977808 | orchestrator | c6af9a1009fc registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-26 04:04:13.977819 | orchestrator | 3a51fd5dcb26 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-26 04:04:13.977830 | orchestrator | 212c65b505ce registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-26 04:04:13.977848 | orchestrator | 428298fcfb6e registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-03-26 04:04:13.977860 | orchestrator | 6fff7b3529d3 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-03-26 04:04:13.977871 | orchestrator | e0fc6d6cd546 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-03-26 04:04:13.977906 | orchestrator | 658463a5ab16 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-26 04:04:13.977918 | orchestrator | 1015fd0d4be5 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-26 04:04:13.977930 | orchestrator | 938dfb4b5809 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-03-26 04:04:13.977952 | orchestrator | 7b61d18683f1 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-03-26 04:04:13.977963 | orchestrator | 84d11367df1c registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-26 04:04:13.977974 | orchestrator | f53fd21463f1 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-26 04:04:13.977990 | orchestrator | 6d3d7a997c3a registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-26 04:04:13.978002 | orchestrator | 7afbf1ab3e1d registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-03-26 04:04:13.978013 | orchestrator | d021acb82cd1 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-03-26 04:04:13.978063 | orchestrator | bbf56f666257 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 
2026-03-26 04:04:13.978074 | orchestrator | 02345889a066 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-03-26 04:04:13.978085 | orchestrator | 6690561e229b registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-03-26 04:04:13.978128 | orchestrator | f3e2f038b0cd registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-03-26 04:04:13.978140 | orchestrator | 7eacd3eb081d registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-03-26 04:04:13.978151 | orchestrator | 33e9db06bbef registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume 2026-03-26 04:04:13.978162 | orchestrator | 501c5dc0acdd registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-26 04:04:13.978173 | orchestrator | 15d969d91992 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-03-26 04:04:13.978184 | orchestrator | b158cf01e40b registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-03-26 04:04:13.978195 | orchestrator | ee404b650bd7 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-26 04:04:13.978212 | orchestrator | 29a44929d5a6 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) 
skyline_apiserver 2026-03-26 04:04:13.978242 | orchestrator | f16c1f62e400 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-03-26 04:04:13.978254 | orchestrator | 2415fb04f136 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-03-26 04:04:13.978266 | orchestrator | 7886f78fe202 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-26 04:04:13.978277 | orchestrator | 156eb6887714 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-03-26 04:04:13.978288 | orchestrator | ab5d246d0721 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-03-26 04:04:13.978299 | orchestrator | 4ad302f098f7 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-03-26 04:04:13.978310 | orchestrator | 139e866a4b61 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-03-26 04:04:13.978321 | orchestrator | 1c6ed65052b6 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 55 minutes (healthy) keystone 2026-03-26 04:04:13.978332 | orchestrator | d393c5606814 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-03-26 04:04:13.978343 | orchestrator | f820741ba15d registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-03-26 04:04:13.978355 | 
orchestrator | 198d679236a4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-0 2026-03-26 04:04:13.978366 | orchestrator | 48ac09de19f5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-03-26 04:04:13.978382 | orchestrator | c1b85917b265 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-03-26 04:04:13.978393 | orchestrator | 28734fa9b704 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-26 04:04:13.978405 | orchestrator | 83ce56dde60c registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-26 04:04:13.978416 | orchestrator | 5d67cc327d74 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-26 04:04:13.978427 | orchestrator | fdf645f7da16 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-26 04:04:13.978438 | orchestrator | 4e815f33dd1c registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-26 04:04:13.978455 | orchestrator | 5d315a0b0023 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-26 04:04:13.978466 | orchestrator | 196668baf1fa registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-26 04:04:13.978484 | orchestrator | 7d42071b582a 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-26 04:04:13.978495 | orchestrator | b05b43a972e2 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-26 04:04:13.978507 | orchestrator | e9731f9eb885 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-26 04:04:13.978518 | orchestrator | 9b8a2aa0aa07 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-26 04:04:13.978529 | orchestrator | b1cb8702fe0a registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-03-26 04:04:13.978540 | orchestrator | 05b2989d837f registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-03-26 04:04:13.978551 | orchestrator | bb1665a8bd8c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-26 04:04:13.978563 | orchestrator | 2f3893ca35e0 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-26 04:04:13.978574 | orchestrator | 9ccbd30640c7 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-26 04:04:13.978585 | orchestrator | 97d9ef633eb8 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-26 04:04:13.978596 | orchestrator | cd160ef7c762 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 
2026-03-26 04:04:13.978607 | orchestrator | dc42d1f63132 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-26 04:04:14.307459 | orchestrator | 2026-03-26 04:04:14.307560 | orchestrator | ## Images @ testbed-node-0 2026-03-26 04:04:14.307577 | orchestrator | 2026-03-26 04:04:14.307587 | orchestrator | + echo 2026-03-26 04:04:14.307598 | orchestrator | + echo '## Images @ testbed-node-0' 2026-03-26 04:04:14.307608 | orchestrator | + echo 2026-03-26 04:04:14.307617 | orchestrator | + osism container testbed-node-0 images 2026-03-26 04:04:16.803315 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-26 04:04:16.803409 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-26 04:04:16.803436 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-26 04:04:16.803446 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-26 04:04:16.803471 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-26 04:04:16.803480 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-26 04:04:16.803488 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-26 04:04:16.803496 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-26 04:04:16.803504 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-26 04:04:16.803512 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-26 04:04:16.803520 | orchestrator | registry.osism.tech/kolla/release/haproxy 
2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-26 04:04:16.803528 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-26 04:04:16.803536 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-26 04:04:16.803544 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-26 04:04:16.803551 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-26 04:04:16.803559 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-26 04:04:16.803567 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-26 04:04:16.803575 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-26 04:04:16.803598 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-26 04:04:16.803606 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-26 04:04:16.803614 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-26 04:04:16.803622 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-26 04:04:16.803630 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-26 04:04:16.803638 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-26 04:04:16.803646 | orchestrator | 
registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-26 04:04:16.803658 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-26 04:04:16.803667 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-26 04:04:16.803674 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-26 04:04:16.803682 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-26 04:04:16.803690 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-26 04:04:16.803704 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-26 04:04:16.803712 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-26 04:04:16.804073 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-26 04:04:16.804092 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-26 04:04:16.804102 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-26 04:04:16.804112 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-26 04:04:16.804122 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-26 04:04:16.804131 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-26 04:04:16.804142 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-26 04:04:16.804151 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-26 04:04:16.804160 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-26 04:04:16.804170 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-26 04:04:16.804179 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-26 04:04:16.804188 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-26 04:04:16.804197 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-26 04:04:16.804206 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-26 04:04:16.804215 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-26 04:04:16.804231 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-26 04:04:16.804240 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-26 04:04:16.804247 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-26 04:04:16.804255 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-26 04:04:16.804263 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-26 04:04:16.804271 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-26 04:04:16.804279 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-26 04:04:16.804343 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-26 04:04:16.804352 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-26 04:04:16.804369 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-26 04:04:16.804377 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-26 04:04:16.804385 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-26 04:04:16.804392 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-26 04:04:16.804400 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-26 04:04:16.804408 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-26 04:04:16.804416 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-26 04:04:16.804424 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-26 04:04:16.804439 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-26 04:04:16.804447 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-26 04:04:16.804455 | orchestrator | 
registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-26 04:04:16.804462 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-26 04:04:16.804470 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-26 04:04:16.804534 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-26 04:04:17.113633 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-26 04:04:17.114496 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-26 04:04:17.176834 | orchestrator | 2026-03-26 04:04:17.176953 | orchestrator | ## Containers @ testbed-node-1 2026-03-26 04:04:17.176986 | orchestrator | 2026-03-26 04:04:17.177005 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-26 04:04:17.177024 | orchestrator | + echo 2026-03-26 04:04:17.177045 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-03-26 04:04:17.177065 | orchestrator | + echo 2026-03-26 04:04:17.177084 | orchestrator | + osism container testbed-node-1 ps 2026-03-26 04:04:19.610480 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-26 04:04:19.610553 | orchestrator | 5595f7aa7bde registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-26 04:04:19.610560 | orchestrator | e70175a7f54f registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-26 04:04:19.610565 | orchestrator | 3947fc72dc2b registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-26 04:04:19.610569 | orchestrator | 79d75ef2089b registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init 
--single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-26 04:04:19.610589 | orchestrator | 7bfe76f30932 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-26 04:04:19.610608 | orchestrator | c528ae2db2f9 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-26 04:04:19.610612 | orchestrator | 34200a94a9bb registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-26 04:04:19.610619 | orchestrator | a619c7d8b861 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-26 04:04:19.610623 | orchestrator | 31eed0c039c6 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-26 04:04:19.610627 | orchestrator | d7c831ca33ae registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-03-26 04:04:19.610631 | orchestrator | ac2ea9870602 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-26 04:04:19.610635 | orchestrator | a5e88825bb8f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-26 04:04:19.610638 | orchestrator | 7e841d0eab1a registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-26 04:04:19.610642 | orchestrator | 281374c8a420 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-26 04:04:19.610646 | orchestrator | c732d68cf8a0 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-26 04:04:19.610650 | orchestrator | c34a9fd30b59 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-26 04:04:19.610654 | orchestrator | 8438fd6d7aab registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-03-26 04:04:19.610658 | orchestrator | 30fc5e345a33 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-03-26 04:04:19.610777 | orchestrator | 6eb7797a56dd registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-03-26 04:04:19.610784 | orchestrator | 0fa943cb70b5 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-26 04:04:19.610788 | orchestrator | 9ccee7b0750c registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-26 04:04:19.610792 | orchestrator | ff30b926a413 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-03-26 04:04:19.610796 | orchestrator | dbbf2a638521 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-03-26 04:04:19.610804 | 
orchestrator | ab634d007474 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-26 04:04:19.610808 | orchestrator | 80b69c1f45f8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-26 04:04:19.610812 | orchestrator | 0c654d6208e2 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-26 04:04:19.610816 | orchestrator | c9a047b3172f registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-03-26 04:04:19.610823 | orchestrator | c3785a6cb03e registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-03-26 04:04:19.610827 | orchestrator | 8f6b5ce65f81 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-03-26 04:04:19.610831 | orchestrator | b5df48747341 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-03-26 04:04:19.610835 | orchestrator | 97c106d5b111 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-03-26 04:04:19.610839 | orchestrator | b695ff5dbfeb registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-03-26 04:04:19.610843 | orchestrator | 3740fc0af171 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes 
(healthy) cinder_backup 2026-03-26 04:04:19.610847 | orchestrator | 6e071ad5926e registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_volume 2026-03-26 04:04:19.610851 | orchestrator | e794b78209c8 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-26 04:04:19.610855 | orchestrator | acd586036117 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-03-26 04:04:19.610859 | orchestrator | e56028a9ebc8 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-03-26 04:04:19.610862 | orchestrator | 06f4dc20a799 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-26 04:04:19.610871 | orchestrator | b30b39ae1f62 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-03-26 04:04:19.610875 | orchestrator | 12aa037e6068 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-03-26 04:04:19.610879 | orchestrator | 5905244ae562 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-03-26 04:04:19.610886 | orchestrator | 3f86776bbdd0 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-26 04:04:19.610890 | orchestrator | 39ee1db54373 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 44 minutes (healthy) nova_api 2026-03-26 
04:04:19.610894 | orchestrator | 17d85da2dc3e registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-03-26 04:04:19.610898 | orchestrator | 8ecb795d537e registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-03-26 04:04:19.610902 | orchestrator | 2c6fcd0fb49d registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-03-26 04:04:19.610906 | orchestrator | 2f709020f008 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone 2026-03-26 04:04:19.610909 | orchestrator | eded742115a7 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-03-26 04:04:19.610913 | orchestrator | 6c890b259434 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-03-26 04:04:19.610917 | orchestrator | ef5500dbe92f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-1 2026-03-26 04:04:19.612472 | orchestrator | 4f9927e71ae2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-03-26 04:04:19.612483 | orchestrator | 1fb5a820b9f6 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-03-26 04:04:19.612487 | orchestrator | 7afd57013429 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-26 04:04:19.612491 | orchestrator | eb26e25add2e 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-26 04:04:19.612498 | orchestrator | 4474d5901cb6 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-26 04:04:19.612503 | orchestrator | f3366fda0db6 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-26 04:04:19.612506 | orchestrator | 8eea01b43193 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-26 04:04:19.612510 | orchestrator | fd6d0d5f070c registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-26 04:04:19.612519 | orchestrator | cbec62d736b0 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-26 04:04:19.612523 | orchestrator | a2c9b8c73ba7 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-26 04:04:19.612527 | orchestrator | de191e12b2ba registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-26 04:04:19.612530 | orchestrator | f4bf64deb003 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-26 04:04:19.612535 | orchestrator | 04c02bb470b8 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-26 04:04:19.612539 | orchestrator | c5a6b7eb3732 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-26 04:04:19.612543 | orchestrator | 16973c91b145 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-03-26 04:04:19.612546 | orchestrator | 88a35f9e4664 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-26 04:04:19.612550 | orchestrator | bf5b20e3ad39 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-26 04:04:19.612554 | orchestrator | a55c8b3a0c88 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-26 04:04:19.612558 | orchestrator | b85bba8a226d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-26 04:04:19.612562 | orchestrator | bfa74148a0c1 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-26 04:04:19.612569 | orchestrator | 0f65f0407b45 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-26 04:04:19.934354 | orchestrator | 2026-03-26 04:04:19.934447 | orchestrator | ## Images @ testbed-node-1 2026-03-26 04:04:19.934461 | orchestrator | 2026-03-26 04:04:19.934472 | orchestrator | + echo 2026-03-26 04:04:19.934483 | orchestrator | + echo '## Images @ testbed-node-1' 2026-03-26 04:04:19.934494 | orchestrator | + echo 2026-03-26 04:04:19.934504 | orchestrator | + osism container testbed-node-1 images 2026-03-26 04:04:22.373303 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-26 04:04:22.373437 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 
618df24dfbf4 3 months ago 322MB 2026-03-26 04:04:22.373453 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-26 04:04:22.373465 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-26 04:04:22.373478 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-26 04:04:22.373489 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-26 04:04:22.373526 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-26 04:04:22.373538 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-26 04:04:22.373549 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-26 04:04:22.373560 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-26 04:04:22.373571 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-26 04:04:22.373582 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-26 04:04:22.373593 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-26 04:04:22.373604 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-26 04:04:22.373615 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-26 04:04:22.373626 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-26 04:04:22.373637 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-26 04:04:22.373648 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-26 04:04:22.373659 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-26 04:04:22.373670 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-26 04:04:22.373698 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-26 04:04:22.373710 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-26 04:04:22.373721 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-26 04:04:22.373774 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-26 04:04:22.373787 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-26 04:04:22.373798 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-26 04:04:22.373809 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-26 04:04:22.373826 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-26 04:04:22.373839 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-26 04:04:22.373852 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 
months ago 976MB 2026-03-26 04:04:22.373864 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-26 04:04:22.373878 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-26 04:04:22.373918 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-26 04:04:22.373932 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-26 04:04:22.373945 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-26 04:04:22.373958 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-26 04:04:22.373971 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-26 04:04:22.373984 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-26 04:04:22.373996 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-26 04:04:22.374009 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-26 04:04:22.374093 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-26 04:04:22.374114 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-26 04:04:22.374135 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-26 04:04:22.374156 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-26 
04:04:22.374176 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-26 04:04:22.374197 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-26 04:04:22.374217 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-26 04:04:22.374237 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-26 04:04:22.374252 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-26 04:04:22.374263 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-26 04:04:22.374274 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-26 04:04:22.374285 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-26 04:04:22.374296 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-26 04:04:22.374307 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-26 04:04:22.374317 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-26 04:04:22.374328 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-26 04:04:22.374340 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-26 04:04:22.374351 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 
2026-03-26 04:04:22.374371 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-26 04:04:22.374382 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-26 04:04:22.374393 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-26 04:04:22.374403 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-26 04:04:22.374414 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-26 04:04:22.374425 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-26 04:04:22.374446 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-26 04:04:22.374457 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-26 04:04:22.374468 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-26 04:04:22.374479 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-26 04:04:22.374490 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-26 04:04:22.374501 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-26 04:04:22.691204 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-03-26 04:04:22.691658 | orchestrator | ++ semver 9.5.0 5.0.0 2026-03-26 04:04:22.743181 | orchestrator | 2026-03-26 04:04:22.743262 | orchestrator | ## Containers @ testbed-node-2 2026-03-26 
04:04:22.743277 | orchestrator | 2026-03-26 04:04:22.743287 | orchestrator | + [[ 1 -eq -1 ]] 2026-03-26 04:04:22.743298 | orchestrator | + echo 2026-03-26 04:04:22.743309 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-03-26 04:04:22.743319 | orchestrator | + echo 2026-03-26 04:04:22.743329 | orchestrator | + osism container testbed-node-2 ps 2026-03-26 04:04:25.219652 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-03-26 04:04:25.219875 | orchestrator | cb5a2e0036ae registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-03-26 04:04:25.219907 | orchestrator | b1fc9f76bfe1 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-03-26 04:04:25.219924 | orchestrator | 79bb3cfa27d7 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-03-26 04:04:25.219942 | orchestrator | 74f7548606e0 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-03-26 04:04:25.219955 | orchestrator | 6ff2ddb90324 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-03-26 04:04:25.219965 | orchestrator | 7541b568abd6 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-03-26 04:04:25.219976 | orchestrator | c8e9e481f34b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-03-26 04:04:25.220019 | orchestrator | 91072cae9492 
registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-03-26 04:04:25.220030 | orchestrator | e9bb963e7218 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-03-26 04:04:25.220041 | orchestrator | 12a9083a7490 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-03-26 04:04:25.220050 | orchestrator | 4f365a87ab5f registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-03-26 04:04:25.220066 | orchestrator | 42b0d560762f registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-03-26 04:04:25.221140 | orchestrator | 4511ec114d0d registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-03-26 04:04:25.221166 | orchestrator | 39c62c61e597 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-03-26 04:04:25.221176 | orchestrator | 4eb2be969478 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-03-26 04:04:25.221186 | orchestrator | 3dd95017f326 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-03-26 04:04:25.221196 | orchestrator | 93d1c0dc0e86 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-03-26 04:04:25.221206 | orchestrator | c225d72455b3 
registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-03-26 04:04:25.221215 | orchestrator | a01140bcf6bf registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker 2026-03-26 04:04:25.221225 | orchestrator | 0056447f6cdc registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-03-26 04:04:25.221235 | orchestrator | 23eb3d9e27c0 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-03-26 04:04:25.221245 | orchestrator | 649fe0554c73 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-03-26 04:04:25.221254 | orchestrator | 87e3f0068950 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-03-26 04:04:25.221264 | orchestrator | dc2d6c4aae9a registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_worker 2026-03-26 04:04:25.221285 | orchestrator | cd5dacfed1b8 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-03-26 04:04:25.221295 | orchestrator | c0e8307f4672 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-03-26 04:04:25.221305 | orchestrator | 68bd368adfdc registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 
2026-03-26 04:04:25.221315 | orchestrator | c51595c11699 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-03-26 04:04:25.221325 | orchestrator | 8dffa9e9b29d registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-03-26 04:04:25.221334 | orchestrator | 6d0216ea6747 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-03-26 04:04:25.221344 | orchestrator | 2ad9f138b268 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_keystone_listener 2026-03-26 04:04:25.221354 | orchestrator | 61e3accb5af7 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) barbican_api 2026-03-26 04:04:25.221375 | orchestrator | 4c7ed3dcd5fc registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-03-26 04:04:25.221385 | orchestrator | cf0ff34a99c9 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_volume 2026-03-26 04:04:25.221395 | orchestrator | 67e117d5e66e registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-03-26 04:04:25.221410 | orchestrator | 70fa2260ff57 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-03-26 04:04:25.221427 | orchestrator | 799f8b17e49d registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes 
(healthy) glance_api 2026-03-26 04:04:25.221443 | orchestrator | 1230e722ebb3 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-03-26 04:04:25.221468 | orchestrator | 1b7da13de50d registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-03-26 04:04:25.221484 | orchestrator | 8142e29746fa registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 39 minutes ago Up 39 minutes (healthy) horizon 2026-03-26 04:04:25.221499 | orchestrator | 13a1d439e229 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_novncproxy 2026-03-26 04:04:25.221516 | orchestrator | 3df27d3081d1 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-03-26 04:04:25.221542 | orchestrator | 02286aff1530 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_api 2026-03-26 04:04:25.221611 | orchestrator | 69a542a499a4 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 45 minutes ago Up 45 minutes (healthy) nova_scheduler 2026-03-26 04:04:25.221632 | orchestrator | 04863e81a670 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-03-26 04:04:25.221650 | orchestrator | 943c7539f102 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) placement_api 2026-03-26 04:04:25.221664 | orchestrator | 1795602436d0 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone 2026-03-26 
04:04:25.221674 | orchestrator | 390c789c5ca0 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-03-26 04:04:25.221683 | orchestrator | 32eece189277 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-03-26 04:04:25.221693 | orchestrator | 4373766b4740 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-2 2026-03-26 04:04:25.221703 | orchestrator | c7fe63b73697 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-03-26 04:04:25.221713 | orchestrator | 2a382ea60872 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-03-26 04:04:25.221729 | orchestrator | 09e52c0af1ac registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-03-26 04:04:25.221870 | orchestrator | 5cd3e7218a58 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-03-26 04:04:25.221885 | orchestrator | 6a6022c2623e registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-03-26 04:04:25.221896 | orchestrator | 0e57fca16035 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-03-26 04:04:25.221908 | orchestrator | b5135bee0b88 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-03-26 04:04:25.221920 | orchestrator | 2e496286f4d6 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-03-26 04:04:25.221931 | orchestrator | 774e79736876 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-03-26 04:04:25.221942 | orchestrator | a33e8c9a11ce registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-03-26 04:04:25.221963 | orchestrator | dc6520f0098a registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-03-26 04:04:25.221974 | orchestrator | 620d722dcf31 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-03-26 04:04:25.221986 | orchestrator | a26c32c37c05 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-03-26 04:04:25.221998 | orchestrator | 5df26a02d819 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-03-26 04:04:25.222009 | orchestrator | a0ce1c2c8df2 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-03-26 04:04:25.222067 | orchestrator | 35762f95d946 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-03-26 04:04:25.222080 | orchestrator | 4a8d2c92eea7 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-03-26 04:04:25.222092 | orchestrator | 6a2161fe9af1 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-03-26 04:04:25.222103 | orchestrator | c05e2866aaa1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-03-26 04:04:25.222114 | orchestrator | d6f4b784a67c registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-03-26 04:04:25.222124 | orchestrator | 7742d32707c2 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-03-26 04:04:25.546213 | orchestrator | 2026-03-26 04:04:25.546334 | orchestrator | ## Images @ testbed-node-2 2026-03-26 04:04:25.546359 | orchestrator | 2026-03-26 04:04:25.546378 | orchestrator | + echo 2026-03-26 04:04:25.546397 | orchestrator | + echo '## Images @ testbed-node-2' 2026-03-26 04:04:25.546416 | orchestrator | + echo 2026-03-26 04:04:25.546434 | orchestrator | + osism container testbed-node-2 images 2026-03-26 04:04:27.917504 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-03-26 04:04:27.917609 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 3 months ago 322MB 2026-03-26 04:04:27.917623 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 3 months ago 266MB 2026-03-26 04:04:27.917636 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 3 months ago 1.56GB 2026-03-26 04:04:27.917648 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 3 months ago 276MB 2026-03-26 04:04:27.917659 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 3 months ago 1.53GB 2026-03-26 04:04:27.917670 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 3 months ago 669MB 2026-03-26 04:04:27.917681 | 
orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 3 months ago 265MB 2026-03-26 04:04:27.917715 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 3 months ago 1.02GB 2026-03-26 04:04:27.917727 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 3 months ago 412MB 2026-03-26 04:04:27.917789 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 3 months ago 274MB 2026-03-26 04:04:27.917806 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 3 months ago 578MB 2026-03-26 04:04:27.917818 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 3 months ago 273MB 2026-03-26 04:04:27.917829 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 3 months ago 273MB 2026-03-26 04:04:27.917840 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 3 months ago 452MB 2026-03-26 04:04:27.917867 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 3 months ago 1.15GB 2026-03-26 04:04:27.917879 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 3 months ago 301MB 2026-03-26 04:04:27.917889 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 3 months ago 298MB 2026-03-26 04:04:27.917901 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 3 months ago 357MB 2026-03-26 04:04:27.917911 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 3 months ago 292MB 2026-03-26 04:04:27.917922 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 3 months ago 305MB 2026-03-26 04:04:27.917933 | orchestrator | 
registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 3 months ago 279MB 2026-03-26 04:04:27.917944 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 3 months ago 975MB 2026-03-26 04:04:27.917955 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 3 months ago 279MB 2026-03-26 04:04:27.917966 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 3 months ago 1.37GB 2026-03-26 04:04:27.917977 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 3 months ago 1.21GB 2026-03-26 04:04:27.917988 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 3 months ago 1.21GB 2026-03-26 04:04:27.917999 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 3 months ago 1.21GB 2026-03-26 04:04:27.918010 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 3 months ago 976MB 2026-03-26 04:04:27.918079 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 3 months ago 976MB 2026-03-26 04:04:27.918094 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 3 months ago 1.13GB 2026-03-26 04:04:27.918107 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 3 months ago 1.24GB 2026-03-26 04:04:27.918138 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 3 months ago 1.22GB 2026-03-26 04:04:27.918152 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 3 months ago 1.06GB 2026-03-26 04:04:27.918180 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 3 months ago 1.05GB 2026-03-26 04:04:27.918194 | orchestrator | 
registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 3 months ago 1.05GB 2026-03-26 04:04:27.918238 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 3 months ago 974MB 2026-03-26 04:04:27.918252 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 3 months ago 974MB 2026-03-26 04:04:27.918265 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 3 months ago 974MB 2026-03-26 04:04:27.918278 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 3 months ago 973MB 2026-03-26 04:04:27.918290 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 3 months ago 991MB 2026-03-26 04:04:27.918303 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 3 months ago 991MB 2026-03-26 04:04:27.918316 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 3 months ago 990MB 2026-03-26 04:04:27.918329 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 3 months ago 1.09GB 2026-03-26 04:04:27.918342 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 3 months ago 1.04GB 2026-03-26 04:04:27.918354 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 3 months ago 1.04GB 2026-03-26 04:04:27.918367 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 3 months ago 1.03GB 2026-03-26 04:04:27.918380 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 3 months ago 1.03GB 2026-03-26 04:04:27.918393 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 3 months ago 1.05GB 2026-03-26 04:04:27.918405 | orchestrator | 
registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 3 months ago 1.03GB 2026-03-26 04:04:27.918419 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 3 months ago 1.05GB 2026-03-26 04:04:27.918431 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 3 months ago 1.16GB 2026-03-26 04:04:27.918441 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 3 months ago 1.1GB 2026-03-26 04:04:27.918452 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 3 months ago 983MB 2026-03-26 04:04:27.918463 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 3 months ago 989MB 2026-03-26 04:04:27.918474 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 3 months ago 984MB 2026-03-26 04:04:27.918485 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 3 months ago 984MB 2026-03-26 04:04:27.918496 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 3 months ago 989MB 2026-03-26 04:04:27.918506 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 3 months ago 984MB 2026-03-26 04:04:27.918517 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 3 months ago 1.05GB 2026-03-26 04:04:27.918535 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 3 months ago 990MB 2026-03-26 04:04:27.918546 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 3 months ago 1.72GB 2026-03-26 04:04:27.918557 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 3 months ago 1.4GB 2026-03-26 04:04:27.918568 | orchestrator 
| registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 3 months ago 1.41GB 2026-03-26 04:04:27.918585 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 3 months ago 1.4GB 2026-03-26 04:04:27.918597 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 3 months ago 840MB 2026-03-26 04:04:27.918608 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 3 months ago 840MB 2026-03-26 04:04:27.918619 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 3 months ago 840MB 2026-03-26 04:04:27.918630 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 3 months ago 840MB 2026-03-26 04:04:27.918641 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 10 months ago 1.27GB 2026-03-26 04:04:28.224323 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-03-26 04:04:28.229891 | orchestrator | + set -e 2026-03-26 04:04:28.229964 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 04:04:28.229979 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 04:04:28.229990 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 04:04:28.229999 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 04:04:28.230009 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 04:04:28.230067 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 04:04:28.230079 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 04:04:28.230090 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 04:04:28.230100 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 04:04:28.230110 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 04:04:28.230120 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 04:04:28.230130 | orchestrator | ++ export ARA=false 2026-03-26 04:04:28.230140 | orchestrator | ++ ARA=false 
2026-03-26 04:04:28.230150 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-26 04:04:28.230160 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-26 04:04:28.230170 | orchestrator | ++ export TEMPEST=false
2026-03-26 04:04:28.230179 | orchestrator | ++ TEMPEST=false
2026-03-26 04:04:28.230189 | orchestrator | ++ export IS_ZUUL=true
2026-03-26 04:04:28.230199 | orchestrator | ++ IS_ZUUL=true
2026-03-26 04:04:28.230209 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2026-03-26 04:04:28.230220 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54
2026-03-26 04:04:28.230229 | orchestrator | ++ export EXTERNAL_API=false
2026-03-26 04:04:28.230239 | orchestrator | ++ EXTERNAL_API=false
2026-03-26 04:04:28.230249 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-26 04:04:28.230258 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-26 04:04:28.230269 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-26 04:04:28.230279 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-26 04:04:28.230289 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-26 04:04:28.230299 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-26 04:04:28.230308 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-26 04:04:28.230318 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-03-26 04:04:28.239333 | orchestrator | + set -e
2026-03-26 04:04:28.239889 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-26 04:04:28.239917 | orchestrator | ++ export INTERACTIVE=false
2026-03-26 04:04:28.239931 | orchestrator | ++ INTERACTIVE=false
2026-03-26 04:04:28.239942 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-26 04:04:28.239953 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-26 04:04:28.239964 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-03-26 04:04:28.240552 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-03-26 04:04:28.248259 | orchestrator |
2026-03-26 04:04:28.248300 | orchestrator | # Ceph status
2026-03-26 04:04:28.248313 | orchestrator |
2026-03-26 04:04:28.248324 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-26 04:04:28.248336 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-26 04:04:28.248348 | orchestrator | + echo
2026-03-26 04:04:28.248359 | orchestrator | + echo '# Ceph status'
2026-03-26 04:04:28.248371 | orchestrator | + echo
2026-03-26 04:04:28.248382 | orchestrator | + ceph -s
2026-03-26 04:04:28.849107 | orchestrator | cluster:
2026-03-26 04:04:28.849209 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-03-26 04:04:28.849226 | orchestrator | health: HEALTH_OK
2026-03-26 04:04:28.849247 | orchestrator |
2026-03-26 04:04:28.849265 | orchestrator | services:
2026-03-26 04:04:28.849277 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 71m)
2026-03-26 04:04:28.849290 | orchestrator | mgr: testbed-node-1(active, since 58m), standbys: testbed-node-0, testbed-node-2
2026-03-26 04:04:28.849302 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-03-26 04:04:28.849313 | orchestrator | osd: 6 osds: 6 up (since 67m), 6 in (since 68m)
2026-03-26 04:04:28.849325 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-03-26 04:04:28.849336 | orchestrator |
2026-03-26 04:04:28.849348 | orchestrator | data:
2026-03-26 04:04:28.849359 | orchestrator | volumes: 1/1 healthy
2026-03-26 04:04:28.849371 | orchestrator | pools: 14 pools, 401 pgs
2026-03-26 04:04:28.849383 | orchestrator | objects: 556 objects, 2.2 GiB
2026-03-26 04:04:28.849394 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail
2026-03-26 04:04:28.849405 | orchestrator | pgs: 401 active+clean
2026-03-26 04:04:28.849417 | orchestrator |
2026-03-26 04:04:28.889811 | orchestrator |
2026-03-26 04:04:28.889905 | orchestrator | # Ceph versions
2026-03-26 04:04:28.889921 | orchestrator |
2026-03-26 04:04:28.889934 | orchestrator | + echo
2026-03-26 04:04:28.889946 | orchestrator | + echo '# Ceph versions'
2026-03-26 04:04:28.889958 | orchestrator | + echo
2026-03-26 04:04:28.889969 | orchestrator | + ceph versions
2026-03-26 04:04:29.487635 | orchestrator | {
2026-03-26 04:04:29.487800 | orchestrator | "mon": {
2026-03-26 04:04:29.487825 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-26 04:04:29.487839 | orchestrator | },
2026-03-26 04:04:29.487851 | orchestrator | "mgr": {
2026-03-26 04:04:29.487862 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-26 04:04:29.487873 | orchestrator | },
2026-03-26 04:04:29.487884 | orchestrator | "osd": {
2026-03-26 04:04:29.487895 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-03-26 04:04:29.487906 | orchestrator | },
2026-03-26 04:04:29.487917 | orchestrator | "mds": {
2026-03-26 04:04:29.487928 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-26 04:04:29.487939 | orchestrator | },
2026-03-26 04:04:29.487950 | orchestrator | "rgw": {
2026-03-26 04:04:29.487961 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-03-26 04:04:29.487971 | orchestrator | },
2026-03-26 04:04:29.487982 | orchestrator | "overall": {
2026-03-26 04:04:29.488015 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-03-26 04:04:29.488028 | orchestrator | }
2026-03-26 04:04:29.488039 | orchestrator | }
2026-03-26 04:04:29.538577 | orchestrator |
2026-03-26 04:04:29.538642 | orchestrator | # Ceph OSD tree
2026-03-26 04:04:29.538648 | orchestrator |
2026-03-26 04:04:29.538652 | orchestrator | + echo
2026-03-26 04:04:29.538656 | orchestrator | + echo '# Ceph OSD tree'
2026-03-26 04:04:29.538661 | orchestrator | + echo
2026-03-26 04:04:29.538665 | orchestrator | + ceph osd df tree
2026-03-26 04:04:30.047951 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2026-03-26 04:04:30.048089 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 369 MiB 113 GiB 5.87 1.00 - root default
2026-03-26 04:04:30.048116 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3
2026-03-26 04:04:30.048137 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 840 MiB 779 MiB 1 KiB 62 MiB 19 GiB 4.11 0.70 189 up osd.0
2026-03-26 04:04:30.048154 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 62 MiB 18 GiB 7.63 1.30 201 up osd.3
2026-03-26 04:04:30.048212 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4
2026-03-26 04:04:30.048251 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 62 MiB 19 GiB 7.20 1.23 206 up osd.2
2026-03-26 04:04:30.048270 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 928 MiB 867 MiB 1 KiB 62 MiB 19 GiB 4.54 0.77 186 up osd.5
2026-03-26 04:04:30.048289 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-5
2026-03-26 04:04:30.048306 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.95 1.01 192 up osd.1
2026-03-26 04:04:30.048323 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.79 0.99 196 up osd.4
2026-03-26 04:04:30.048340 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 369 MiB 113 GiB 5.87
2026-03-26 04:04:30.048357 | orchestrator | MIN/MAX VAR: 0.70/1.30 STDDEV: 1.27
2026-03-26 04:04:30.097345 | orchestrator |
2026-03-26 04:04:30.097449 | orchestrator | # Ceph monitor status
2026-03-26 04:04:30.097466 | orchestrator |
2026-03-26 04:04:30.097478 | orchestrator | + echo
2026-03-26 04:04:30.097490 | orchestrator | + echo '# Ceph monitor status'
2026-03-26 04:04:30.097502 | orchestrator | + echo
2026-03-26 04:04:30.097513 | orchestrator | + ceph mon stat
2026-03-26 04:04:30.668244 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2026-03-26 04:04:30.710287 | orchestrator |
2026-03-26 04:04:30.710414 | orchestrator | # Ceph quorum status
2026-03-26 04:04:30.710425 | orchestrator |
2026-03-26 04:04:30.710433 | orchestrator | + echo
2026-03-26 04:04:30.710440 | orchestrator | + echo '# Ceph quorum status'
2026-03-26 04:04:30.710448 | orchestrator | + echo
2026-03-26 04:04:30.710462 | orchestrator | + ceph quorum_status
2026-03-26 04:04:30.711376 | orchestrator | + jq
2026-03-26 04:04:31.342582 | orchestrator | {
2026-03-26 04:04:31.342682 | orchestrator | "election_epoch": 8,
2026-03-26 04:04:31.342698 | orchestrator | "quorum": [
2026-03-26 04:04:31.342711 | orchestrator | 0,
2026-03-26 04:04:31.342722 | orchestrator | 1,
2026-03-26 04:04:31.342782 | orchestrator | 2
2026-03-26 04:04:31.342796 | orchestrator | ],
2026-03-26 04:04:31.342808 | orchestrator | "quorum_names": [
2026-03-26 04:04:31.342819 | orchestrator | "testbed-node-0",
2026-03-26 04:04:31.342831 | orchestrator | "testbed-node-1",
2026-03-26 04:04:31.342842 | orchestrator | "testbed-node-2"
2026-03-26 04:04:31.342854 | orchestrator | ],
2026-03-26 04:04:31.342865 | orchestrator | "quorum_leader_name": "testbed-node-0",
2026-03-26 04:04:31.342878 | orchestrator | "quorum_age": 4267,
2026-03-26 04:04:31.342890 | orchestrator | "features": {
2026-03-26 04:04:31.342901 | orchestrator | "quorum_con": "4540138322906710015",
2026-03-26 04:04:31.342912 | orchestrator | "quorum_mon": [
2026-03-26 04:04:31.342923 | orchestrator | "kraken",
2026-03-26 04:04:31.342935 | orchestrator | "luminous",
2026-03-26 04:04:31.342946 | orchestrator | "mimic",
2026-03-26 04:04:31.342957 | orchestrator | "osdmap-prune",
2026-03-26 04:04:31.342968 | orchestrator | "nautilus",
2026-03-26 04:04:31.342979 | orchestrator | "octopus",
2026-03-26 04:04:31.342990 | orchestrator | "pacific",
2026-03-26 04:04:31.343001 | orchestrator | "elector-pinging",
2026-03-26 04:04:31.343011 | orchestrator | "quincy",
2026-03-26 04:04:31.343023 | orchestrator | "reef"
2026-03-26 04:04:31.343042 | orchestrator | ]
2026-03-26 04:04:31.343061 | orchestrator | },
2026-03-26 04:04:31.343081 | orchestrator | "monmap": {
2026-03-26 04:04:31.343102 | orchestrator | "epoch": 1,
2026-03-26 04:04:31.343122 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2026-03-26 04:04:31.343143 | orchestrator | "modified": "2026-03-26T02:53:06.287135Z",
2026-03-26 04:04:31.343158 | orchestrator | "created": "2026-03-26T02:53:06.287135Z",
2026-03-26 04:04:31.343170 | orchestrator | "min_mon_release": 18,
2026-03-26 04:04:31.343184 | orchestrator | "min_mon_release_name": "reef",
2026-03-26 04:04:31.343197 | orchestrator | "election_strategy": 1,
2026-03-26 04:04:31.343211 | orchestrator | "disallowed_leaders: ": "",
2026-03-26 04:04:31.343224 | orchestrator | "stretch_mode": false,
2026-03-26 04:04:31.343262 | orchestrator | "tiebreaker_mon": "",
2026-03-26 04:04:31.343276 | orchestrator | "removed_ranks: ": "",
2026-03-26 04:04:31.343288 | orchestrator | "features": {
2026-03-26 04:04:31.343300 | orchestrator | "persistent": [
2026-03-26 04:04:31.343314 | orchestrator | "kraken",
2026-03-26 04:04:31.343326 | orchestrator | "luminous",
2026-03-26 04:04:31.343339 | orchestrator | "mimic",
2026-03-26 04:04:31.343353 | orchestrator | "osdmap-prune",
2026-03-26 04:04:31.343365 | orchestrator | "nautilus",
2026-03-26 04:04:31.343378 | orchestrator | "octopus",
2026-03-26 04:04:31.343391 | orchestrator | "pacific",
2026-03-26 04:04:31.343404 | orchestrator | "elector-pinging",
2026-03-26 04:04:31.343417 | orchestrator | "quincy",
2026-03-26 04:04:31.343431 | orchestrator | "reef"
2026-03-26 04:04:31.343443 | orchestrator | ],
2026-03-26 04:04:31.343456 | orchestrator | "optional": []
2026-03-26 04:04:31.343470 | orchestrator | },
2026-03-26 04:04:31.343482 | orchestrator | "mons": [
2026-03-26 04:04:31.343495 | orchestrator | {
2026-03-26 04:04:31.343509 | orchestrator | "rank": 0,
2026-03-26 04:04:31.343521 | orchestrator | "name": "testbed-node-0",
2026-03-26 04:04:31.343532 | orchestrator | "public_addrs": {
2026-03-26 04:04:31.343543 | orchestrator | "addrvec": [
2026-03-26 04:04:31.343554 | orchestrator | {
2026-03-26 04:04:31.343565 | orchestrator | "type": "v2",
2026-03-26 04:04:31.343576 | orchestrator | "addr": "192.168.16.10:3300",
2026-03-26 04:04:31.343587 | orchestrator | "nonce": 0
2026-03-26 04:04:31.343598 | orchestrator | },
2026-03-26 04:04:31.343609 | orchestrator | {
2026-03-26 04:04:31.343620 | orchestrator | "type": "v1",
2026-03-26 04:04:31.343631 | orchestrator | "addr": "192.168.16.10:6789",
2026-03-26 04:04:31.343642 | orchestrator | "nonce": 0
2026-03-26 04:04:31.343653 | orchestrator | }
2026-03-26 04:04:31.343664 | orchestrator | ]
2026-03-26 04:04:31.343675 | orchestrator | },
2026-03-26 04:04:31.343686 | orchestrator | "addr": "192.168.16.10:6789/0",
2026-03-26 04:04:31.343697 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2026-03-26 04:04:31.343708 | orchestrator | "priority": 0,
2026-03-26 04:04:31.343719 | orchestrator | "weight": 0,
2026-03-26 04:04:31.343730 | orchestrator | "crush_location": "{}"
2026-03-26 04:04:31.343781 | orchestrator | },
2026-03-26 04:04:31.343793 | orchestrator | {
2026-03-26 04:04:31.343804 | orchestrator | "rank": 1,
2026-03-26 04:04:31.343815 | orchestrator | "name": "testbed-node-1",
2026-03-26 04:04:31.343826 | orchestrator | "public_addrs": {
2026-03-26 04:04:31.343837 | orchestrator | "addrvec": [
2026-03-26 04:04:31.343848 | orchestrator | {
2026-03-26 04:04:31.343859 | orchestrator | "type": "v2",
2026-03-26 04:04:31.343870 | orchestrator | "addr": "192.168.16.11:3300",
2026-03-26 04:04:31.343881 | orchestrator | "nonce": 0
2026-03-26 04:04:31.343893 | orchestrator | },
2026-03-26 04:04:31.343903 | orchestrator | {
2026-03-26 04:04:31.343914 | orchestrator | "type": "v1",
2026-03-26 04:04:31.343926 | orchestrator | "addr": "192.168.16.11:6789",
2026-03-26 04:04:31.343937 | orchestrator | "nonce": 0
2026-03-26 04:04:31.343948 | orchestrator | }
2026-03-26 04:04:31.343959 | orchestrator | ]
2026-03-26 04:04:31.343970 | orchestrator | },
2026-03-26 04:04:31.343981 | orchestrator | "addr": "192.168.16.11:6789/0",
2026-03-26 04:04:31.343992 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2026-03-26 04:04:31.344003 | orchestrator | "priority": 0,
2026-03-26 04:04:31.344014 | orchestrator | "weight": 0,
2026-03-26 04:04:31.344025 | orchestrator | "crush_location": "{}"
2026-03-26 04:04:31.344036 | orchestrator | },
2026-03-26 04:04:31.344047 | orchestrator | {
2026-03-26 04:04:31.344058 | orchestrator | "rank": 2,
2026-03-26 04:04:31.344073 | orchestrator | "name": "testbed-node-2",
2026-03-26 04:04:31.344093 | orchestrator | "public_addrs": {
2026-03-26 04:04:31.344113 | orchestrator | "addrvec": [
2026-03-26 04:04:31.344134 | orchestrator | {
2026-03-26 04:04:31.344155 | orchestrator | "type": "v2",
2026-03-26 04:04:31.344174 | orchestrator | "addr": "192.168.16.12:3300",
2026-03-26 04:04:31.344191 | orchestrator | "nonce": 0
2026-03-26 04:04:31.344202 | orchestrator | },
2026-03-26 04:04:31.344213 | orchestrator | {
2026-03-26 04:04:31.344224 | orchestrator | "type": "v1",
2026-03-26 04:04:31.344235 | orchestrator | "addr": "192.168.16.12:6789",
2026-03-26 04:04:31.344246 | orchestrator | "nonce": 0
2026-03-26 04:04:31.344270 | orchestrator | }
2026-03-26 04:04:31.344281 | orchestrator | ]
2026-03-26 04:04:31.344292 | orchestrator | },
2026-03-26 04:04:31.344303 | orchestrator | "addr": "192.168.16.12:6789/0",
2026-03-26 04:04:31.344314 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2026-03-26 04:04:31.344324 | orchestrator | "priority": 0,
2026-03-26 04:04:31.344350 | orchestrator | "weight": 0,
2026-03-26 04:04:31.344362 | orchestrator | "crush_location": "{}"
2026-03-26 04:04:31.344373 | orchestrator | }
2026-03-26 04:04:31.344384 | orchestrator | ]
2026-03-26 04:04:31.344395 | orchestrator | }
2026-03-26 04:04:31.344406 | orchestrator | }
2026-03-26 04:04:31.344417 | orchestrator |
2026-03-26 04:04:31.344428 | orchestrator | # Ceph free space status
2026-03-26 04:04:31.344439 | orchestrator | + echo
2026-03-26 04:04:31.344450 | orchestrator | + echo '# Ceph free space status'
2026-03-26 04:04:31.344461 | orchestrator |
2026-03-26 04:04:31.344473 | orchestrator | + echo
2026-03-26 04:04:31.344484 | orchestrator | + ceph df
2026-03-26 04:04:31.965069 | orchestrator | --- RAW STORAGE ---
2026-03-26 04:04:31.965176 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2026-03-26 04:04:31.965203 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87
2026-03-26 04:04:31.965226 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87
2026-03-26 04:04:31.965247 | orchestrator |
2026-03-26 04:04:31.965268 | orchestrator | --- POOLS ---
2026-03-26 04:04:31.965287 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2026-03-26 04:04:31.965307 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2026-03-26 04:04:31.965327 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2026-03-26 04:04:31.965345 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2026-03-26 04:04:31.965366 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2026-03-26 04:04:31.965387 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2026-03-26 04:04:31.965406 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2026-03-26 04:04:31.965425 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB
2026-03-26 04:04:31.965442 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2026-03-26 04:04:31.965461 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB
2026-03-26 04:04:31.965480 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2026-03-26 04:04:31.965500 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2026-03-26 04:04:31.965520 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB
2026-03-26 04:04:31.965540 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2026-03-26 04:04:31.965559 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2026-03-26 04:04:32.017577 | orchestrator | ++ semver 9.5.0 5.0.0
2026-03-26 04:04:32.082250 | orchestrator | + [[ 1 -eq -1 ]]
2026-03-26 04:04:32.082324 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2026-03-26 04:04:32.082333 | orchestrator | + osism apply facts
2026-03-26 04:04:34.123243 | orchestrator | 2026-03-26 04:04:34 | INFO  | Task a8207bc9-b917-4707-81f1-417025aac495 (facts) was prepared for execution.
2026-03-26 04:04:34.123331 | orchestrator | 2026-03-26 04:04:34 | INFO  | It takes a moment until task a8207bc9-b917-4707-81f1-417025aac495 (facts) has been started and output is visible here.
2026-03-26 04:04:47.751572 | orchestrator |
2026-03-26 04:04:47.751680 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-26 04:04:47.751698 | orchestrator |
2026-03-26 04:04:47.751710 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-26 04:04:47.751722 | orchestrator | Thursday 26 March 2026 04:04:38 +0000 (0:00:00.296) 0:00:00.296 ********
2026-03-26 04:04:47.751777 | orchestrator | ok: [testbed-manager]
2026-03-26 04:04:47.751792 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:04:47.751803 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:04:47.751814 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:04:47.751833 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:04:47.751894 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:04:47.751913 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:04:47.751932 | orchestrator |
2026-03-26 04:04:47.751950 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-26 04:04:47.751971 | orchestrator | Thursday 26 March 2026 04:04:39 +0000 (0:00:01.129) 0:00:01.425 ********
2026-03-26 04:04:47.751989 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:04:47.752012 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:04:47.752023 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:04:47.752034 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:04:47.752045 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:04:47.752057 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:04:47.752067 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:04:47.752136 | orchestrator |
2026-03-26 04:04:47.752149 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-26 04:04:47.752162 | orchestrator |
2026-03-26 04:04:47.752175 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-26 04:04:47.752188 | orchestrator | Thursday 26 March 2026 04:04:41 +0000 (0:00:01.360) 0:00:02.785 ********
2026-03-26 04:04:47.752200 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:04:47.752213 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:04:47.752226 | orchestrator | ok: [testbed-manager]
2026-03-26 04:04:47.752238 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:04:47.752251 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:04:47.752263 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:04:47.752275 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:04:47.752288 | orchestrator |
2026-03-26 04:04:47.752301 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-26 04:04:47.752315 | orchestrator |
2026-03-26 04:04:47.752327 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-26 04:04:47.752341 | orchestrator | Thursday 26 March 2026 04:04:46 +0000 (0:00:05.714) 0:00:08.500 ********
2026-03-26 04:04:47.752353 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:04:47.752366 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:04:47.752379 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:04:47.752391 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:04:47.752403 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:04:47.752416 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:04:47.752428 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:04:47.752441 | orchestrator |
2026-03-26 04:04:47.752453 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:04:47.752468 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:04:47.752482 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:04:47.752493 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:04:47.752504 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:04:47.752515 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:04:47.752526 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:04:47.752537 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:04:47.752548 | orchestrator |
2026-03-26 04:04:47.752559 | orchestrator |
2026-03-26 04:04:47.752570 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:04:47.752593 | orchestrator | Thursday 26 March 2026 04:04:47 +0000 (0:00:00.589) 0:00:09.089 ********
2026-03-26 04:04:47.752604 | orchestrator | ===============================================================================
2026-03-26 04:04:47.752615 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.71s
2026-03-26 04:04:47.752626 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s
2026-03-26 04:04:47.752637 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s
2026-03-26 04:04:47.752648 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2026-03-26 04:04:48.055540 | orchestrator | + osism validate ceph-mons
2026-03-26 04:05:20.519780 | orchestrator |
2026-03-26 04:05:20.519889 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-03-26 04:05:20.519908 | orchestrator |
2026-03-26 04:05:20.519920 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-03-26 04:05:20.519949 | orchestrator | Thursday 26 March 2026 04:05:04 +0000 (0:00:00.452) 0:00:00.452 ********
2026-03-26 04:05:20.519962 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-26 04:05:20.519974 | orchestrator |
2026-03-26 04:05:20.519986 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-26 04:05:20.519997 | orchestrator | Thursday 26 March 2026 04:05:05 +0000 (0:00:00.853) 0:00:01.306 ********
2026-03-26 04:05:20.520008 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-03-26 04:05:20.520019 | orchestrator |
2026-03-26 04:05:20.520031 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-26 04:05:20.520042 | orchestrator | Thursday 26 March 2026 04:05:06 +0000 (0:00:00.812) 0:00:02.118 ********
2026-03-26 04:05:20.520054 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.520066 | orchestrator |
2026-03-26 04:05:20.520077 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-03-26 04:05:20.520088 | orchestrator | Thursday 26 March 2026 04:05:06 +0000 (0:00:00.115) 0:00:02.234 ********
2026-03-26 04:05:20.520100 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.520111 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:05:20.520122 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:05:20.520133 | orchestrator |
2026-03-26 04:05:20.520144 | orchestrator | TASK [Get container info] ******************************************************
2026-03-26 04:05:20.520156 | orchestrator | Thursday 26 March 2026 04:05:06 +0000 (0:00:00.259) 0:00:02.494 ********
2026-03-26 04:05:20.520167 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:05:20.520178 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.520189 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:05:20.520200 | orchestrator |
2026-03-26 04:05:20.520211 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-03-26 04:05:20.520222 | orchestrator | Thursday 26 March 2026 04:05:07 +0000 (0:00:01.012) 0:00:03.506 ********
2026-03-26 04:05:20.520234 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.520246 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:05:20.520257 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:05:20.520270 | orchestrator |
2026-03-26 04:05:20.520283 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-03-26 04:05:20.520296 | orchestrator | Thursday 26 March 2026 04:05:08 +0000 (0:00:00.267) 0:00:03.774 ********
2026-03-26 04:05:20.520309 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.520322 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:05:20.520335 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:05:20.520348 | orchestrator |
2026-03-26 04:05:20.520361 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-26 04:05:20.520374 | orchestrator | Thursday 26 March 2026 04:05:08 +0000 (0:00:00.407) 0:00:04.181 ********
2026-03-26 04:05:20.520388 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.520400 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:05:20.520413 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:05:20.520425 | orchestrator |
2026-03-26 04:05:20.520460 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-03-26 04:05:20.520474 | orchestrator | Thursday 26 March 2026 04:05:08 +0000 (0:00:00.299) 0:00:04.480 ********
2026-03-26 04:05:20.520488 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.520500 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:05:20.520513 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:05:20.520527 | orchestrator |
2026-03-26 04:05:20.520539 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-03-26 04:05:20.520552 | orchestrator | Thursday 26 March 2026 04:05:09 +0000 (0:00:00.289) 0:00:04.770 ********
2026-03-26 04:05:20.520579 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.520592 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:05:20.520616 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:05:20.520628 | orchestrator |
2026-03-26 04:05:20.520644 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-26 04:05:20.520656 | orchestrator | Thursday 26 March 2026 04:05:09 +0000 (0:00:00.426) 0:00:05.196 ********
2026-03-26 04:05:20.520667 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.520678 | orchestrator |
2026-03-26 04:05:20.520689 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-26 04:05:20.520700 | orchestrator | Thursday 26 March 2026 04:05:09 +0000 (0:00:00.225) 0:00:05.421 ********
2026-03-26 04:05:20.520711 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.520722 | orchestrator |
2026-03-26 04:05:20.520733 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-26 04:05:20.520760 | orchestrator | Thursday 26 March 2026 04:05:09 +0000 (0:00:00.233) 0:00:05.655 ********
2026-03-26 04:05:20.520771 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.520782 | orchestrator |
2026-03-26 04:05:20.520793 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:05:20.520804 | orchestrator | Thursday 26 March 2026 04:05:10 +0000 (0:00:00.261) 0:00:05.916 ********
2026-03-26 04:05:20.520816 | orchestrator |
2026-03-26 04:05:20.520827 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:05:20.520838 | orchestrator | Thursday 26 March 2026 04:05:10 +0000 (0:00:00.068) 0:00:05.985 ********
2026-03-26 04:05:20.520849 | orchestrator |
2026-03-26 04:05:20.520860 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:05:20.520871 | orchestrator | Thursday 26 March 2026 04:05:10 +0000 (0:00:00.071) 0:00:06.056 ********
2026-03-26 04:05:20.520882 | orchestrator |
2026-03-26 04:05:20.520893 | orchestrator | TASK [Print report file information] *******************************************
2026-03-26 04:05:20.520904 | orchestrator | Thursday 26 March 2026 04:05:10 +0000 (0:00:00.071) 0:00:06.127 ********
2026-03-26 04:05:20.520915 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.520926 | orchestrator |
2026-03-26 04:05:20.520937 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-03-26 04:05:20.520948 | orchestrator | Thursday 26 March 2026 04:05:10 +0000 (0:00:00.225) 0:00:06.353 ********
2026-03-26 04:05:20.520959 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.520970 | orchestrator |
2026-03-26 04:05:20.520999 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-03-26 04:05:20.521011 | orchestrator | Thursday 26 March 2026 04:05:10 +0000 (0:00:00.234) 0:00:06.588 ********
2026-03-26 04:05:20.521022 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521033 | orchestrator |
2026-03-26 04:05:20.521044 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-03-26 04:05:20.521055 | orchestrator | Thursday 26 March 2026 04:05:11 +0000 (0:00:00.115) 0:00:06.703 ********
2026-03-26 04:05:20.521066 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:05:20.521081 | orchestrator |
2026-03-26 04:05:20.521093 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-03-26 04:05:20.521104 | orchestrator | Thursday 26 March 2026 04:05:12 +0000 (0:00:01.639) 0:00:08.343 ********
2026-03-26 04:05:20.521115 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521135 | orchestrator |
2026-03-26 04:05:20.521146 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-03-26 04:05:20.521157 | orchestrator | Thursday 26 March 2026 04:05:13 +0000 (0:00:00.528) 0:00:08.871 ********
2026-03-26 04:05:20.521168 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.521179 | orchestrator |
2026-03-26 04:05:20.521190 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-03-26 04:05:20.521201 | orchestrator | Thursday 26 March 2026 04:05:13 +0000 (0:00:00.133) 0:00:09.004 ********
2026-03-26 04:05:20.521212 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521223 | orchestrator |
2026-03-26 04:05:20.521234 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-03-26 04:05:20.521246 | orchestrator | Thursday 26 March 2026 04:05:13 +0000 (0:00:00.333) 0:00:09.338 ********
2026-03-26 04:05:20.521257 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521267 | orchestrator |
2026-03-26 04:05:20.521279 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-03-26 04:05:20.521290 | orchestrator | Thursday 26 March 2026 04:05:13 +0000 (0:00:00.320) 0:00:09.659 ********
2026-03-26 04:05:20.521300 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.521311 | orchestrator |
2026-03-26 04:05:20.521322 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-03-26 04:05:20.521333 | orchestrator | Thursday 26 March 2026 04:05:14 +0000 (0:00:00.128) 0:00:09.787 ********
2026-03-26 04:05:20.521344 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521355 | orchestrator |
2026-03-26 04:05:20.521366 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-03-26 04:05:20.521377 | orchestrator | Thursday 26 March 2026 04:05:14 +0000 (0:00:00.116) 0:00:09.904 ********
2026-03-26 04:05:20.521388 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521399 | orchestrator |
2026-03-26 04:05:20.521410 | orchestrator | TASK [Gather status data] ******************************************************
2026-03-26 04:05:20.521421 | orchestrator | Thursday 26 March 2026 04:05:14 +0000 (0:00:00.168) 0:00:10.073 ********
2026-03-26 04:05:20.521432 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:05:20.521443 | orchestrator |
2026-03-26 04:05:20.521454 | orchestrator | TASK [Set health test data] ****************************************************
2026-03-26 04:05:20.521465 | orchestrator | Thursday 26 March 2026 04:05:15 +0000 (0:00:01.416) 0:00:11.490 ********
2026-03-26 04:05:20.521476 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521487 | orchestrator |
2026-03-26 04:05:20.521498 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-03-26 04:05:20.521509 | orchestrator | Thursday 26 March 2026 04:05:16 +0000 (0:00:00.313) 0:00:11.803 ********
2026-03-26 04:05:20.521520 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:05:20.521531 | orchestrator |
2026-03-26 04:05:20.521542 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-03-26 04:05:20.521553 | orchestrator | Thursday 26 March 2026 04:05:16 +0000 (0:00:00.143) 0:00:11.947 ********
2026-03-26 04:05:20.521564 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:05:20.521575 | orchestrator |
2026-03-26 04:05:20.521586 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-03-26 04:05:20.521603 | orchestrator | Thursday 26 March 2026 04:05:16 +0000 (0:00:00.165) 0:00:12.112 ********
2026-03-26 04:05:20.521614 |
orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:20.521625 | orchestrator | 2026-03-26 04:05:20.521637 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2026-03-26 04:05:20.521648 | orchestrator | Thursday 26 March 2026 04:05:16 +0000 (0:00:00.142) 0:00:12.255 ******** 2026-03-26 04:05:20.521659 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:20.521670 | orchestrator | 2026-03-26 04:05:20.521681 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-26 04:05:20.521692 | orchestrator | Thursday 26 March 2026 04:05:16 +0000 (0:00:00.384) 0:00:12.640 ******** 2026-03-26 04:05:20.521703 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:20.521720 | orchestrator | 2026-03-26 04:05:20.521731 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-26 04:05:20.521762 | orchestrator | Thursday 26 March 2026 04:05:17 +0000 (0:00:00.310) 0:00:12.950 ******** 2026-03-26 04:05:20.521773 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:20.521784 | orchestrator | 2026-03-26 04:05:20.521795 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-26 04:05:20.521806 | orchestrator | Thursday 26 March 2026 04:05:17 +0000 (0:00:00.281) 0:00:13.231 ******** 2026-03-26 04:05:20.521817 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:20.521829 | orchestrator | 2026-03-26 04:05:20.521840 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-26 04:05:20.521851 | orchestrator | Thursday 26 March 2026 04:05:19 +0000 (0:00:02.146) 0:00:15.378 ******** 2026-03-26 04:05:20.521862 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:20.521873 | orchestrator | 2026-03-26 04:05:20.521884 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2026-03-26 04:05:20.521895 | orchestrator | Thursday 26 March 2026 04:05:19 +0000 (0:00:00.278) 0:00:15.657 ******** 2026-03-26 04:05:20.521906 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:20.521917 | orchestrator | 2026-03-26 04:05:20.521936 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:23.387376 | orchestrator | Thursday 26 March 2026 04:05:20 +0000 (0:00:00.270) 0:00:15.927 ******** 2026-03-26 04:05:23.387482 | orchestrator | 2026-03-26 04:05:23.387500 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:23.387512 | orchestrator | Thursday 26 March 2026 04:05:20 +0000 (0:00:00.079) 0:00:16.007 ******** 2026-03-26 04:05:23.387523 | orchestrator | 2026-03-26 04:05:23.387537 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:23.387551 | orchestrator | Thursday 26 March 2026 04:05:20 +0000 (0:00:00.100) 0:00:16.107 ******** 2026-03-26 04:05:23.387570 | orchestrator | 2026-03-26 04:05:23.387588 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-26 04:05:23.387606 | orchestrator | Thursday 26 March 2026 04:05:20 +0000 (0:00:00.084) 0:00:16.191 ******** 2026-03-26 04:05:23.387624 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:23.387643 | orchestrator | 2026-03-26 04:05:23.387663 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-26 04:05:23.387678 | orchestrator | Thursday 26 March 2026 04:05:22 +0000 (0:00:01.618) 0:00:17.809 ******** 2026-03-26 04:05:23.387689 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-26 04:05:23.387700 | orchestrator |  "msg": [ 
2026-03-26 04:05:23.387714 | orchestrator |  "Validator run completed.", 2026-03-26 04:05:23.387726 | orchestrator |  "You can find the report file here:", 2026-03-26 04:05:23.387798 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-03-26T04:05:05+00:00-report.json", 2026-03-26 04:05:23.387811 | orchestrator |  "on the following host:", 2026-03-26 04:05:23.387823 | orchestrator |  "testbed-manager" 2026-03-26 04:05:23.387835 | orchestrator |  ] 2026-03-26 04:05:23.387846 | orchestrator | } 2026-03-26 04:05:23.387858 | orchestrator | 2026-03-26 04:05:23.387870 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:05:23.387883 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2026-03-26 04:05:23.387896 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 04:05:23.387908 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 04:05:23.387947 | orchestrator | 2026-03-26 04:05:23.387960 | orchestrator | 2026-03-26 04:05:23.387974 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:05:23.387987 | orchestrator | Thursday 26 March 2026 04:05:23 +0000 (0:00:00.917) 0:00:18.727 ******** 2026-03-26 04:05:23.388000 | orchestrator | =============================================================================== 2026-03-26 04:05:23.388014 | orchestrator | Aggregate test results step one ----------------------------------------- 2.15s 2026-03-26 04:05:23.388027 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.64s 2026-03-26 04:05:23.388040 | orchestrator | Write report file ------------------------------------------------------- 1.62s 2026-03-26 04:05:23.388052 | orchestrator | Gather status data 
------------------------------------------------------ 1.42s 2026-03-26 04:05:23.388065 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2026-03-26 04:05:23.388079 | orchestrator | Print report file information ------------------------------------------- 0.92s 2026-03-26 04:05:23.388092 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-03-26 04:05:23.388105 | orchestrator | Create report output directory ------------------------------------------ 0.81s 2026-03-26 04:05:23.388118 | orchestrator | Set quorum test data ---------------------------------------------------- 0.53s 2026-03-26 04:05:23.388131 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.43s 2026-03-26 04:05:23.388144 | orchestrator | Set test result to passed if container is existing ---------------------- 0.41s 2026-03-26 04:05:23.388157 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.38s 2026-03-26 04:05:23.388170 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2026-03-26 04:05:23.388183 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2026-03-26 04:05:23.388196 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2026-03-26 04:05:23.388209 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.31s 2026-03-26 04:05:23.388222 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2026-03-26 04:05:23.388235 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2026-03-26 04:05:23.388248 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.28s 2026-03-26 04:05:23.388261 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.28s 2026-03-26 04:05:23.693515 | orchestrator | + osism validate ceph-mgrs 2026-03-26 04:05:55.062583 | orchestrator | 2026-03-26 04:05:55.062695 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2026-03-26 04:05:55.062712 | orchestrator | 2026-03-26 04:05:55.062725 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-26 04:05:55.062765 | orchestrator | Thursday 26 March 2026 04:05:40 +0000 (0:00:00.468) 0:00:00.468 ******** 2026-03-26 04:05:55.062779 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:55.062791 | orchestrator | 2026-03-26 04:05:55.062803 | orchestrator | TASK [Create report output directory] ****************************************** 2026-03-26 04:05:55.062814 | orchestrator | Thursday 26 March 2026 04:05:41 +0000 (0:00:00.832) 0:00:01.301 ******** 2026-03-26 04:05:55.062843 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:55.062855 | orchestrator | 2026-03-26 04:05:55.062867 | orchestrator | TASK [Define report vars] ****************************************************** 2026-03-26 04:05:55.062878 | orchestrator | Thursday 26 March 2026 04:05:42 +0000 (0:00:00.980) 0:00:02.281 ******** 2026-03-26 04:05:55.062889 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.062903 | orchestrator | 2026-03-26 04:05:55.062914 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-03-26 04:05:55.062925 | orchestrator | Thursday 26 March 2026 04:05:42 +0000 (0:00:00.146) 0:00:02.427 ******** 2026-03-26 04:05:55.062936 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.062967 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:05:55.062978 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:05:55.063032 | orchestrator | 2026-03-26 04:05:55.063044 | orchestrator | TASK [Get container 
info] ****************************************************** 2026-03-26 04:05:55.063055 | orchestrator | Thursday 26 March 2026 04:05:42 +0000 (0:00:00.311) 0:00:02.739 ******** 2026-03-26 04:05:55.063066 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.063078 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:05:55.063089 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:05:55.063099 | orchestrator | 2026-03-26 04:05:55.063110 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-03-26 04:05:55.063123 | orchestrator | Thursday 26 March 2026 04:05:43 +0000 (0:00:01.008) 0:00:03.748 ******** 2026-03-26 04:05:55.063137 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.063150 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:05:55.063164 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:05:55.063177 | orchestrator | 2026-03-26 04:05:55.063190 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-03-26 04:05:55.063204 | orchestrator | Thursday 26 March 2026 04:05:44 +0000 (0:00:00.315) 0:00:04.064 ******** 2026-03-26 04:05:55.063218 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.063231 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:05:55.063245 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:05:55.063258 | orchestrator | 2026-03-26 04:05:55.063272 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-03-26 04:05:55.063285 | orchestrator | Thursday 26 March 2026 04:05:44 +0000 (0:00:00.539) 0:00:04.603 ******** 2026-03-26 04:05:55.063299 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.063311 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:05:55.063324 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:05:55.063337 | orchestrator | 2026-03-26 04:05:55.063350 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2026-03-26 04:05:55.063363 | orchestrator | Thursday 26 March 2026 04:05:44 +0000 (0:00:00.340) 0:00:04.943 ******** 2026-03-26 04:05:55.063376 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.063389 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:05:55.063403 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:05:55.063417 | orchestrator | 2026-03-26 04:05:55.063436 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2026-03-26 04:05:55.063457 | orchestrator | Thursday 26 March 2026 04:05:45 +0000 (0:00:00.299) 0:00:05.242 ******** 2026-03-26 04:05:55.063478 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.063497 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:05:55.063516 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:05:55.063534 | orchestrator | 2026-03-26 04:05:55.063552 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-26 04:05:55.063572 | orchestrator | Thursday 26 March 2026 04:05:45 +0000 (0:00:00.544) 0:00:05.787 ******** 2026-03-26 04:05:55.063589 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.063600 | orchestrator | 2026-03-26 04:05:55.063611 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-26 04:05:55.063622 | orchestrator | Thursday 26 March 2026 04:05:46 +0000 (0:00:00.247) 0:00:06.035 ******** 2026-03-26 04:05:55.063633 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.063644 | orchestrator | 2026-03-26 04:05:55.063662 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-26 04:05:55.063673 | orchestrator | Thursday 26 March 2026 04:05:46 +0000 (0:00:00.249) 0:00:06.285 ******** 2026-03-26 04:05:55.063685 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.063695 | orchestrator | 2026-03-26 04:05:55.063707 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2026-03-26 04:05:55.063718 | orchestrator | Thursday 26 March 2026 04:05:46 +0000 (0:00:00.246) 0:00:06.532 ******** 2026-03-26 04:05:55.063729 | orchestrator | 2026-03-26 04:05:55.063761 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:55.063784 | orchestrator | Thursday 26 March 2026 04:05:46 +0000 (0:00:00.070) 0:00:06.602 ******** 2026-03-26 04:05:55.063795 | orchestrator | 2026-03-26 04:05:55.063806 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:55.063817 | orchestrator | Thursday 26 March 2026 04:05:46 +0000 (0:00:00.071) 0:00:06.674 ******** 2026-03-26 04:05:55.063828 | orchestrator | 2026-03-26 04:05:55.063839 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-26 04:05:55.063849 | orchestrator | Thursday 26 March 2026 04:05:46 +0000 (0:00:00.075) 0:00:06.750 ******** 2026-03-26 04:05:55.063860 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.063871 | orchestrator | 2026-03-26 04:05:55.063882 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-03-26 04:05:55.063893 | orchestrator | Thursday 26 March 2026 04:05:46 +0000 (0:00:00.248) 0:00:06.999 ******** 2026-03-26 04:05:55.063904 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.063915 | orchestrator | 2026-03-26 04:05:55.063946 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2026-03-26 04:05:55.063958 | orchestrator | Thursday 26 March 2026 04:05:47 +0000 (0:00:00.245) 0:00:07.245 ******** 2026-03-26 04:05:55.063968 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.063979 | orchestrator | 2026-03-26 04:05:55.063991 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2026-03-26 04:05:55.064002 | orchestrator | Thursday 26 March 2026 04:05:47 +0000 (0:00:00.118) 0:00:07.363 ******** 2026-03-26 04:05:55.064012 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:05:55.064023 | orchestrator | 2026-03-26 04:05:55.064034 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2026-03-26 04:05:55.064045 | orchestrator | Thursday 26 March 2026 04:05:49 +0000 (0:00:02.082) 0:00:09.446 ******** 2026-03-26 04:05:55.064056 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.064067 | orchestrator | 2026-03-26 04:05:55.064078 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2026-03-26 04:05:55.064089 | orchestrator | Thursday 26 March 2026 04:05:49 +0000 (0:00:00.462) 0:00:09.908 ******** 2026-03-26 04:05:55.064099 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.064110 | orchestrator | 2026-03-26 04:05:55.064121 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2026-03-26 04:05:55.064132 | orchestrator | Thursday 26 March 2026 04:05:50 +0000 (0:00:00.323) 0:00:10.231 ******** 2026-03-26 04:05:55.064143 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.064153 | orchestrator | 2026-03-26 04:05:55.064165 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2026-03-26 04:05:55.064175 | orchestrator | Thursday 26 March 2026 04:05:50 +0000 (0:00:00.194) 0:00:10.425 ******** 2026-03-26 04:05:55.064186 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:05:55.064197 | orchestrator | 2026-03-26 04:05:55.064208 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-03-26 04:05:55.064219 | orchestrator | Thursday 26 March 2026 04:05:50 +0000 (0:00:00.143) 0:00:10.569 ******** 2026-03-26 04:05:55.064230 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 
04:05:55.064241 | orchestrator | 2026-03-26 04:05:55.064252 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-03-26 04:05:55.064262 | orchestrator | Thursday 26 March 2026 04:05:50 +0000 (0:00:00.263) 0:00:10.832 ******** 2026-03-26 04:05:55.064273 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:05:55.064284 | orchestrator | 2026-03-26 04:05:55.064295 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-03-26 04:05:55.064306 | orchestrator | Thursday 26 March 2026 04:05:51 +0000 (0:00:00.254) 0:00:11.087 ******** 2026-03-26 04:05:55.064317 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:55.064328 | orchestrator | 2026-03-26 04:05:55.064339 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-03-26 04:05:55.064350 | orchestrator | Thursday 26 March 2026 04:05:52 +0000 (0:00:01.280) 0:00:12.368 ******** 2026-03-26 04:05:55.064368 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:55.064379 | orchestrator | 2026-03-26 04:05:55.064390 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-03-26 04:05:55.064401 | orchestrator | Thursday 26 March 2026 04:05:52 +0000 (0:00:00.256) 0:00:12.625 ******** 2026-03-26 04:05:55.064412 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:55.064422 | orchestrator | 2026-03-26 04:05:55.064433 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:55.064444 | orchestrator | Thursday 26 March 2026 04:05:52 +0000 (0:00:00.254) 0:00:12.879 ******** 2026-03-26 04:05:55.064455 | orchestrator | 2026-03-26 04:05:55.064466 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:55.064477 | orchestrator 
| Thursday 26 March 2026 04:05:52 +0000 (0:00:00.071) 0:00:12.951 ******** 2026-03-26 04:05:55.064488 | orchestrator | 2026-03-26 04:05:55.064499 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-03-26 04:05:55.064509 | orchestrator | Thursday 26 March 2026 04:05:53 +0000 (0:00:00.091) 0:00:13.042 ******** 2026-03-26 04:05:55.064520 | orchestrator | 2026-03-26 04:05:55.064531 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-03-26 04:05:55.064542 | orchestrator | Thursday 26 March 2026 04:05:53 +0000 (0:00:00.270) 0:00:13.313 ******** 2026-03-26 04:05:55.064552 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-03-26 04:05:55.064563 | orchestrator | 2026-03-26 04:05:55.064579 | orchestrator | TASK [Print report file information] ******************************************* 2026-03-26 04:05:55.064591 | orchestrator | Thursday 26 March 2026 04:05:54 +0000 (0:00:01.330) 0:00:14.644 ******** 2026-03-26 04:05:55.064602 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-03-26 04:05:55.064613 | orchestrator |  "msg": [ 2026-03-26 04:05:55.064624 | orchestrator |  "Validator run completed.", 2026-03-26 04:05:55.064636 | orchestrator |  "You can find the report file here:", 2026-03-26 04:05:55.064647 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-03-26T04:05:41+00:00-report.json", 2026-03-26 04:05:55.064659 | orchestrator |  "on the following host:", 2026-03-26 04:05:55.064670 | orchestrator |  "testbed-manager" 2026-03-26 04:05:55.064681 | orchestrator |  ] 2026-03-26 04:05:55.064692 | orchestrator | } 2026-03-26 04:05:55.064703 | orchestrator | 2026-03-26 04:05:55.064714 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:05:55.064726 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2026-03-26 04:05:55.064770 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 04:05:55.064790 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 04:05:55.457905 | orchestrator | 2026-03-26 04:05:55.458001 | orchestrator | 2026-03-26 04:05:55.458064 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:05:55.458078 | orchestrator | Thursday 26 March 2026 04:05:55 +0000 (0:00:00.405) 0:00:15.049 ******** 2026-03-26 04:05:55.458088 | orchestrator | =============================================================================== 2026-03-26 04:05:55.458097 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.08s 2026-03-26 04:05:55.458106 | orchestrator | Write report file ------------------------------------------------------- 1.33s 2026-03-26 04:05:55.458115 | orchestrator | Aggregate test results step one ----------------------------------------- 1.28s 2026-03-26 04:05:55.458124 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2026-03-26 04:05:55.458133 | orchestrator | Create report output directory ------------------------------------------ 0.98s 2026-03-26 04:05:55.458166 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-03-26 04:05:55.458176 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.54s 2026-03-26 04:05:55.458185 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s 2026-03-26 04:05:55.458194 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.46s 2026-03-26 04:05:55.458203 | orchestrator | Flush handlers ---------------------------------------------------------- 0.43s 2026-03-26 04:05:55.458212 | 
orchestrator | Print report file information ------------------------------------------- 0.41s 2026-03-26 04:05:55.458220 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s 2026-03-26 04:05:55.458229 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s 2026-03-26 04:05:55.458238 | orchestrator | Set test result to failed if container is missing ----------------------- 0.32s 2026-03-26 04:05:55.458247 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2026-03-26 04:05:55.458256 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2026-03-26 04:05:55.458264 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s 2026-03-26 04:05:55.458273 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2026-03-26 04:05:55.458282 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.25s 2026-03-26 04:05:55.458291 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s 2026-03-26 04:05:55.775384 | orchestrator | + osism validate ceph-osds 2026-03-26 04:06:17.219350 | orchestrator | 2026-03-26 04:06:17.219458 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2026-03-26 04:06:17.219475 | orchestrator | 2026-03-26 04:06:17.219488 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-03-26 04:06:17.219500 | orchestrator | Thursday 26 March 2026 04:06:12 +0000 (0:00:00.441) 0:00:00.441 ******** 2026-03-26 04:06:17.219511 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-26 04:06:17.219523 | orchestrator | 2026-03-26 04:06:17.219548 | orchestrator | TASK [Get extra vars for Ceph configuration] 
***********************************
2026-03-26 04:06:17.219559 | orchestrator | Thursday 26 March 2026 04:06:13 +0000 (0:00:00.872) 0:00:01.314 ********
2026-03-26 04:06:17.219571 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 04:06:17.219582 | orchestrator |
2026-03-26 04:06:17.219593 | orchestrator | TASK [Create report output directory] ******************************************
2026-03-26 04:06:17.219604 | orchestrator | Thursday 26 March 2026 04:06:14 +0000 (0:00:00.554) 0:00:01.869 ********
2026-03-26 04:06:17.219615 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 04:06:17.219626 | orchestrator |
2026-03-26 04:06:17.219637 | orchestrator | TASK [Define report vars] ******************************************************
2026-03-26 04:06:17.219648 | orchestrator | Thursday 26 March 2026 04:06:14 +0000 (0:00:00.789) 0:00:02.658 ********
2026-03-26 04:06:17.219659 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:17.219673 | orchestrator |
2026-03-26 04:06:17.219685 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-26 04:06:17.219696 | orchestrator | Thursday 26 March 2026 04:06:14 +0000 (0:00:00.123) 0:00:02.792 ********
2026-03-26 04:06:17.219707 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:17.219719 | orchestrator |
2026-03-26 04:06:17.219730 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-26 04:06:17.219805 | orchestrator | Thursday 26 March 2026 04:06:15 +0000 (0:00:00.123) 0:00:02.916 ********
2026-03-26 04:06:17.219825 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:17.219843 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:17.219854 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:17.219865 | orchestrator |
2026-03-26 04:06:17.219876 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-03-26 04:06:17.219915 | orchestrator | Thursday 26 March 2026 04:06:15 +0000 (0:00:00.312) 0:00:03.228 ********
2026-03-26 04:06:17.219929 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:17.219942 | orchestrator |
2026-03-26 04:06:17.219955 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-03-26 04:06:17.219968 | orchestrator | Thursday 26 March 2026 04:06:15 +0000 (0:00:00.161) 0:00:03.390 ********
2026-03-26 04:06:17.219980 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:17.219993 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:17.220006 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:17.220019 | orchestrator |
2026-03-26 04:06:17.220031 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-03-26 04:06:17.220045 | orchestrator | Thursday 26 March 2026 04:06:15 +0000 (0:00:00.315) 0:00:03.705 ********
2026-03-26 04:06:17.220057 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:17.220070 | orchestrator |
2026-03-26 04:06:17.220082 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-26 04:06:17.220095 | orchestrator | Thursday 26 March 2026 04:06:16 +0000 (0:00:00.784) 0:00:04.490 ********
2026-03-26 04:06:17.220108 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:17.220121 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:17.220133 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:17.220146 | orchestrator |
2026-03-26 04:06:17.220158 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-03-26 04:06:17.220171 | orchestrator | Thursday 26 March 2026 04:06:16 +0000 (0:00:00.292) 0:00:04.782 ********
2026-03-26 04:06:17.220186 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a47ae157c609fdba1c3cb78cfabe35faae1300906bbf5c42bf9812cf5ae3fc3', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-26 04:06:17.220202 | orchestrator | skipping: [testbed-node-3] => (item={'id': '19873f576682f61766adcb2a0a3bd7e7d79b6aaf7f359db744e235b802241d18', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-26 04:06:17.220217 | orchestrator | skipping: [testbed-node-3] => (item={'id': '64dc82767cbb91bfba3d4fbe0f910f1c1b95d0856e1921a113524e50b04683ae', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-26 04:06:17.220230 | orchestrator | skipping: [testbed-node-3] => (item={'id': '905865d8832bd6a48e47e77f25a307fabc6f4262f500433c693482e257e98351', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-26 04:06:17.220243 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0a2ea494cefa3bc62f3a02d275955bf06259547cc9c9e7d9a6af1873fa1e568c', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-26 04:06:17.220318 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eedb79789b5f6a192a1b22b4016b1979f5d3369e8b3a5882bd980f73e378d679', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-26 04:06:17.220332 | orchestrator | skipping: [testbed-node-3] => (item={'id': '99dc660051ff8aa712dc0eb018d8040e264e7abe5fee8210dbf2a9b2887da177', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-26 04:06:17.220344 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b7b3e024c8e1cc2e99343381e2b424833301da2438f00dc4ab4f87d51ae0ce3d', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-26 04:06:17.220362 | orchestrator | skipping: [testbed-node-3] => (item={'id': '261fb7fe11f046be78125f2f61829982321ff86ca846ffb773a8692a92cdc390', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.220380 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a3e70f4017c0ded81315b1398083acb671abe365698c575bfc6fc4c9e4d55223', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.220392 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eacf9bcf824e7e1d8ff361b32cadee04f8af28293f8b9c40a61bc7a939d2150b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.220405 | orchestrator | ok: [testbed-node-3] => (item={'id': 'dd1dcbbb5855901401655533a58e1a34834b99fe4c43bf30a9fcba2126bd3e51', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.220417 | orchestrator | ok: [testbed-node-3] => (item={'id': '30b29e57211792d4ff597f247627c18d4ebc0c7ea1e4f1feaa4a30caa4ab2f31', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.220429 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ce801b2b1273df80d38b32bfc814e9fae0bff5a3f34069b7cd0581325b281bfc', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.220441 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c319b5f6aba28f04e9d72db4d66c7ea078f78b2519ee091016ef2335532f1d29', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-26 04:06:17.220452 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a8c7c868a3e502d499713dbb69d7049568a671e189c4d31df99d14f7de70569d', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-26 04:06:17.220464 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ebb9b42d163cdb490d062e6074b99691af113a221b62481074d5a150e8044129', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:17.220476 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2c477485e8286e4a0e2e6d3499c0dc253c6c858a2d7106f81cae62bb3c374eea', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:17.220487 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f2f4183c1750edb2a41639c237d8eb38ee30d96ee3593749616ba4f4b9c5c515', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:17.220500 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e18421415e4ffa13dbe3a848dfdf8b91a1323c1f464879f5ec2e6aab512a0f6c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-26 04:06:17.220518 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd775a0281c84c1125a58c04446db7e74566793216c3126ce998f8163abb8bc89', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-26 04:06:17.463450 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9ef2f3ed84c16fec0eb076bb311683c068143380e4f67d2cb7d7ab8c906fed0b', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-26 04:06:17.463569 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ac4c9380a953c178f1b4d8a9ebd52716be1370e8865b0c2c7891f740028771a', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-26 04:06:17.463582 | orchestrator | skipping: [testbed-node-4] => (item={'id': '281009e18ac6262f07d27fc06505b3524e5a3cb883aa16c0504e87f326290c63', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-26 04:06:17.463613 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ac7b1afd4a8f6ca6f58ac6dc079c6e531e13b8a15c2c277555ce9c5de25c2d64', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-26 04:06:17.463624 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5b1338247b29769164d10e260f095122dca1f89297144212768f1fd6c4fb6fb7', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-26 04:06:17.463636 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ff5b1b914ed4c437466f8660c32a9afbeb3695e9ac20e478f9c1ce409f1ed09e', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-26 04:06:17.463647 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bdc4ceb66ebf8c37ba5c65183f623ca08c64fe8018a102a4f0f5eabea3c15fc2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463660 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e79d34b43243c4ec6976fac2d1089c8f2f3486f55094e31b12c811ad29a25c1c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463672 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f3218cdcc9e6ffb5ab7cbc4e409a97c5cba656cef510c1e994aae98145b31869', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463684 | orchestrator | ok: [testbed-node-4] => (item={'id': '49e8eee6f6a9043366b38d966aa36fed7ffa7a7c6fc5094b528cb58857b99f0f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463696 | orchestrator | ok: [testbed-node-4] => (item={'id': 'c2bbcab38cdb8bcb221ca845f7476b10094b0cf3cf85313738dd3864b6053f52', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463708 | orchestrator | skipping: [testbed-node-4] => (item={'id': '612bc1ad6c697915f5f79c2f9ad3754ced490c89063b9bfdddf3dc5ed1737430', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463719 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2314752bd091bb01a3c9c9f4d8f863e576f800a689d5703521d8306adffd76e2', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-26 04:06:17.463731 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0a9ebc944b2de755c4512d2b1442597d5fe4b5297d62fadd6fd92902e760637f', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-26 04:06:17.463812 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd155fbae7e1e3e7432c43233b801b4ce10005b36cfb24888760f62814956a602', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:17.463825 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'abe890022cb2e1a31afca9ad0fc40511ef374822a7fcd357597aec36b3b503d0', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:17.463837 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5a0b086f92fe22dde447751035ee4785535bb4d9089ba71717434afd60163dc7', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:17.463848 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea7350e3ce894a85dd937116b86ff58eefc778996eb3fc48d232d1c1dac27def', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-03-26 04:06:17.463864 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da395bf52b74e3fda92ab38bc0ddabb7e29028090e8635bdc44d7151a273550e', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-26 04:06:17.463876 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7d2f8364b0ceaf10c88bb844de62d071fc4f0a0c28a054f6da0b22c9d76e89c3', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-03-26 04:06:17.463887 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a175bd7fe74190008d3ab3b9bbb0e7d06a5c2d88606abf1f5094899ee6eeadb', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-03-26 04:06:17.463898 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c0845a7a0f6ebf9eaf9aa84f6ffbeab245c4ade1433d91304b63e3259f88214', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-03-26 04:06:17.463910 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b7e86c8b6f142da7eb27605a4d20d1731392d015424cc821aa9ce2b9dfb09a49', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-03-26 04:06:17.463922 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f7d648df1e97280d44e097087185c63d2d79f38b50b281d365f017a81c42fc34', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 43 minutes (healthy)'})
2026-03-26 04:06:17.463932 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8046f86788dee700330fed8eddfd693504584dd39873e897efa033060d800080', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-03-26 04:06:17.463943 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b2877433cc1e735f486090df103fc266a4811a2c8e87200a6ec7cb1adbcafcc4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463954 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a31876dbc950b24a0d90116efabb4b484220e273b36b95211ea490ba31cbf16f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463973 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c7485c82d2f1d547f6bd4f7b5f77839137922770f8e90166964bc693f5838c04', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.463984 | orchestrator | ok: [testbed-node-5] => (item={'id': '53b57430f3c2ea3e26a0313bd0f570ea9f96ee2cccf3420967739a15176d48d7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:17.464005 | orchestrator | ok: [testbed-node-5] => (item={'id': '89e6fe0b290936f07e5cd92c26a65d7e4bad4aab8803b8ba0f073aa6f33518d8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:28.839388 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da50a7d28d968683ce4d4ca44f0cc1ebfe8cb21bf5813a11fffb890dc0b60b07', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-03-26 04:06:28.839503 | orchestrator | skipping: [testbed-node-5] => (item={'id': '837bfa57508f3be2114e78e4ad24b6ea1a5bad3c5b3d7e9b2025c2367a5971e5', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-26 04:06:28.839522 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4d99cd8c4e4111585315968fcf3c1d2f5e140741723d97a5b22057ceeb981db5', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-03-26 04:06:28.839537 | orchestrator | skipping: [testbed-node-5] => (item={'id': '30b368a03fb8c09907c752cec0533ecbd1ff372b353299d4030ee6f65a37b256', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:28.839551 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a6edd44b74454d1b03eb0b9925de800df0e9da371df15c735dedb32ddf11cc06', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:28.839563 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af8e0c0cd5013d2851f9115b1362788481251f73a98ac8f2abdd4398c7c1a67b', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-03-26 04:06:28.839576 | orchestrator |
2026-03-26 04:06:28.839590 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-03-26 04:06:28.839603 | orchestrator | Thursday 26 March 2026 04:06:17 +0000 (0:00:00.515) 0:00:05.298 ********
2026-03-26 04:06:28.839615 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.839627 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:28.839639 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:28.839650 | orchestrator |
2026-03-26 04:06:28.839662 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-03-26 04:06:28.839673 | orchestrator | Thursday 26 March 2026 04:06:17 +0000 (0:00:00.296) 0:00:05.594 ********
2026-03-26 04:06:28.839684 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.839697 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:28.839708 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:28.839719 | orchestrator |
2026-03-26 04:06:28.839731 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-03-26 04:06:28.839781 | orchestrator | Thursday 26 March 2026 04:06:18 +0000 (0:00:00.481) 0:00:06.075 ********
2026-03-26 04:06:28.839793 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.839804 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:28.839815 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:28.839826 | orchestrator |
2026-03-26 04:06:28.839838 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-26 04:06:28.839872 | orchestrator | Thursday 26 March 2026 04:06:18 +0000 (0:00:00.313) 0:00:06.389 ********
2026-03-26 04:06:28.839884 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.839895 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:28.839906 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:28.839917 | orchestrator |
2026-03-26 04:06:28.839931 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-03-26 04:06:28.839945 | orchestrator | Thursday 26 March 2026 04:06:18 +0000 (0:00:00.304) 0:00:06.693 ********
2026-03-26 04:06:28.839975 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-03-26 04:06:28.839990 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-03-26 04:06:28.840003 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840017 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-03-26 04:06:28.840030 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-03-26 04:06:28.840042 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:28.840055 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-03-26 04:06:28.840068 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-03-26 04:06:28.840080 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:28.840093 | orchestrator |
2026-03-26 04:06:28.840107 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-03-26 04:06:28.840120 | orchestrator | Thursday 26 March 2026 04:06:19 +0000 (0:00:00.326) 0:00:07.020 ********
2026-03-26 04:06:28.840133 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.840145 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:28.840158 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:28.840171 | orchestrator |
2026-03-26 04:06:28.840184 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-26 04:06:28.840197 | orchestrator | Thursday 26 March 2026 04:06:19 +0000 (0:00:00.504) 0:00:07.524 ********
2026-03-26 04:06:28.840210 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840241 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:28.840255 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:28.840268 | orchestrator |
2026-03-26 04:06:28.840281 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-03-26 04:06:28.840292 | orchestrator | Thursday 26 March 2026 04:06:19 +0000 (0:00:00.304) 0:00:07.828 ********
2026-03-26 04:06:28.840304 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840315 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:28.840326 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:28.840337 | orchestrator |
2026-03-26 04:06:28.840348 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-03-26 04:06:28.840359 | orchestrator | Thursday 26 March 2026 04:06:20 +0000 (0:00:00.307) 0:00:08.136 ********
2026-03-26 04:06:28.840370 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.840381 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:28.840391 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:28.840402 | orchestrator |
2026-03-26 04:06:28.840413 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-26 04:06:28.840424 | orchestrator | Thursday 26 March 2026 04:06:20 +0000 (0:00:00.336) 0:00:08.472 ********
2026-03-26 04:06:28.840435 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840446 | orchestrator |
2026-03-26 04:06:28.840462 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-26 04:06:28.840474 | orchestrator | Thursday 26 March 2026 04:06:21 +0000 (0:00:00.680) 0:00:09.153 ********
2026-03-26 04:06:28.840485 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840496 | orchestrator |
2026-03-26 04:06:28.840507 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-26 04:06:28.840526 | orchestrator | Thursday 26 March 2026 04:06:21 +0000 (0:00:00.275) 0:00:09.428 ********
2026-03-26 04:06:28.840537 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840548 | orchestrator |
2026-03-26 04:06:28.840559 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:06:28.840570 | orchestrator | Thursday 26 March 2026 04:06:21 +0000 (0:00:00.255) 0:00:09.683 ********
2026-03-26 04:06:28.840581 | orchestrator |
2026-03-26 04:06:28.840592 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:06:28.840603 | orchestrator | Thursday 26 March 2026 04:06:21 +0000 (0:00:00.068) 0:00:09.752 ********
2026-03-26 04:06:28.840614 | orchestrator |
2026-03-26 04:06:28.840625 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:06:28.840636 | orchestrator | Thursday 26 March 2026 04:06:21 +0000 (0:00:00.069) 0:00:09.821 ********
2026-03-26 04:06:28.840647 | orchestrator |
2026-03-26 04:06:28.840658 | orchestrator | TASK [Print report file information] *******************************************
2026-03-26 04:06:28.840668 | orchestrator | Thursday 26 March 2026 04:06:22 +0000 (0:00:00.079) 0:00:09.901 ********
2026-03-26 04:06:28.840679 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840690 | orchestrator |
2026-03-26 04:06:28.840701 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-03-26 04:06:28.840712 | orchestrator | Thursday 26 March 2026 04:06:22 +0000 (0:00:00.244) 0:00:10.145 ********
2026-03-26 04:06:28.840723 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.840734 | orchestrator |
2026-03-26 04:06:28.840762 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-26 04:06:28.840773 | orchestrator | Thursday 26 March 2026 04:06:22 +0000 (0:00:00.248) 0:00:10.394 ********
2026-03-26 04:06:28.840784 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.840795 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:28.840807 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:28.840818 | orchestrator |
2026-03-26 04:06:28.840829 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-03-26 04:06:28.840840 | orchestrator | Thursday 26 March 2026 04:06:22 +0000 (0:00:00.286) 0:00:10.680 ********
2026-03-26 04:06:28.840851 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.840862 | orchestrator |
2026-03-26 04:06:28.840873 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-03-26 04:06:28.840884 | orchestrator | Thursday 26 March 2026 04:06:23 +0000 (0:00:00.679) 0:00:11.360 ********
2026-03-26 04:06:28.840895 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 04:06:28.840906 | orchestrator |
2026-03-26 04:06:28.840918 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-03-26 04:06:28.840929 | orchestrator | Thursday 26 March 2026 04:06:25 +0000 (0:00:01.722) 0:00:13.082 ********
2026-03-26 04:06:28.840939 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.840950 | orchestrator |
2026-03-26 04:06:28.840961 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-03-26 04:06:28.840972 | orchestrator | Thursday 26 March 2026 04:06:25 +0000 (0:00:00.136) 0:00:13.219 ********
2026-03-26 04:06:28.840983 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.840994 | orchestrator |
2026-03-26 04:06:28.841006 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-03-26 04:06:28.841017 | orchestrator | Thursday 26 March 2026 04:06:25 +0000 (0:00:00.318) 0:00:13.538 ********
2026-03-26 04:06:28.841028 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:28.841038 | orchestrator |
2026-03-26 04:06:28.841049 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-03-26 04:06:28.841061 | orchestrator | Thursday 26 March 2026 04:06:25 +0000 (0:00:00.130) 0:00:13.668 ********
2026-03-26 04:06:28.841071 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.841083 | orchestrator |
2026-03-26 04:06:28.841094 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-26 04:06:28.841113 | orchestrator | Thursday 26 March 2026 04:06:25 +0000 (0:00:00.131) 0:00:13.800 ********
2026-03-26 04:06:28.841124 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:28.841135 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:28.841146 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:28.841157 | orchestrator |
2026-03-26 04:06:28.841168 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-03-26 04:06:28.841179 | orchestrator | Thursday 26 March 2026 04:06:26 +0000 (0:00:00.303) 0:00:14.103 ********
2026-03-26 04:06:28.841190 | orchestrator | changed: [testbed-node-3]
2026-03-26 04:06:28.841201 | orchestrator | changed: [testbed-node-4]
2026-03-26 04:06:28.841212 | orchestrator | changed: [testbed-node-5]
2026-03-26 04:06:39.150279 | orchestrator |
2026-03-26 04:06:39.150384 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-03-26 04:06:39.150400 | orchestrator | Thursday 26 March 2026 04:06:28 +0000 (0:00:02.574) 0:00:16.678 ********
2026-03-26 04:06:39.150412 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:39.150424 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:39.150435 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:39.150446 | orchestrator |
2026-03-26 04:06:39.150458 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-03-26 04:06:39.150469 | orchestrator | Thursday 26 March 2026 04:06:29 +0000 (0:00:00.326) 0:00:17.004 ********
2026-03-26 04:06:39.150480 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:39.150491 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:39.150501 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:39.150512 | orchestrator |
2026-03-26 04:06:39.150523 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-03-26 04:06:39.150534 | orchestrator | Thursday 26 March 2026 04:06:29 +0000 (0:00:00.581) 0:00:17.586 ********
2026-03-26 04:06:39.150545 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:39.150557 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:39.150568 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:39.150579 | orchestrator |
2026-03-26 04:06:39.150608 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-03-26 04:06:39.150620 | orchestrator | Thursday 26 March 2026 04:06:30 +0000 (0:00:00.306) 0:00:17.892 ********
2026-03-26 04:06:39.150631 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:39.150642 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:39.150653 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:39.150663 | orchestrator |
2026-03-26 04:06:39.150674 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-03-26 04:06:39.150686 | orchestrator | Thursday 26 March 2026 04:06:30 +0000 (0:00:00.565) 0:00:18.458 ********
2026-03-26 04:06:39.150696 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:39.150707 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:39.150718 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:39.150729 | orchestrator |
2026-03-26 04:06:39.150780 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-03-26 04:06:39.150794 | orchestrator | Thursday 26 March 2026 04:06:30 +0000 (0:00:00.311) 0:00:18.769 ********
2026-03-26 04:06:39.150805 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:39.150817 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:39.150830 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:39.150842 | orchestrator |
2026-03-26 04:06:39.150861 | orchestrator | TASK [Prepare test data] *******************************************************
2026-03-26 04:06:39.150881 | orchestrator | Thursday 26 March 2026 04:06:31 +0000 (0:00:00.297) 0:00:19.067 ********
2026-03-26 04:06:39.150902 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:39.150921 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:39.150939 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:39.150959 | orchestrator |
2026-03-26 04:06:39.150980 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-03-26 04:06:39.151000 | orchestrator | Thursday 26 March 2026 04:06:31 +0000 (0:00:00.503) 0:00:19.571 ********
2026-03-26 04:06:39.151052 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:39.151068 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:39.151082 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:39.151094 | orchestrator |
2026-03-26 04:06:39.151108 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-03-26 04:06:39.151121 | orchestrator | Thursday 26 March 2026 04:06:32 +0000 (0:00:00.776) 0:00:20.347 ********
2026-03-26 04:06:39.151134 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:39.151148 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:39.151161 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:39.151174 | orchestrator |
2026-03-26 04:06:39.151186 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-03-26 04:06:39.151197 | orchestrator | Thursday 26 March 2026 04:06:32 +0000 (0:00:00.330) 0:00:20.677 ********
2026-03-26 04:06:39.151208 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:39.151219 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:06:39.151230 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:06:39.151240 | orchestrator |
2026-03-26 04:06:39.151252 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-03-26 04:06:39.151263 | orchestrator | Thursday 26 March 2026 04:06:33 +0000 (0:00:00.339) 0:00:21.017 ********
2026-03-26 04:06:39.151274 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:06:39.151284 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:06:39.151295 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:06:39.151306 | orchestrator |
2026-03-26 04:06:39.151317 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-03-26 04:06:39.151328 | orchestrator | Thursday 26 March 2026 04:06:33 +0000 (0:00:00.507) 0:00:21.525 ********
2026-03-26 04:06:39.151339 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 04:06:39.151351 | orchestrator |
2026-03-26 04:06:39.151362 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-03-26 04:06:39.151373 | orchestrator | Thursday 26 March 2026 04:06:33 +0000 (0:00:00.283) 0:00:21.808 ********
2026-03-26 04:06:39.151384 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:06:39.151395 | orchestrator |
2026-03-26 04:06:39.151406 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-03-26 04:06:39.151417 | orchestrator | Thursday 26 March 2026 04:06:34 +0000 (0:00:00.264) 0:00:22.073 ********
2026-03-26 04:06:39.151428 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 04:06:39.151439 | orchestrator |
2026-03-26 04:06:39.151450 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-03-26 04:06:39.151461 | orchestrator | Thursday 26 March 2026 04:06:35 +0000 (0:00:01.680) 0:00:23.753 ********
2026-03-26 04:06:39.151472 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 04:06:39.151483 | orchestrator |
2026-03-26 04:06:39.151494 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-03-26 04:06:39.151505 | orchestrator | Thursday 26 March 2026 04:06:36 +0000 (0:00:00.276) 0:00:24.030 ********
2026-03-26 04:06:39.151516 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 04:06:39.151527 | orchestrator |
2026-03-26 04:06:39.151556 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:06:39.151568 | orchestrator | Thursday 26 March 2026 04:06:36 +0000 (0:00:00.248) 0:00:24.279 ********
2026-03-26 04:06:39.151579 | orchestrator |
2026-03-26 04:06:39.151590 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:06:39.151601 | orchestrator | Thursday 26 March 2026 04:06:36 +0000 (0:00:00.070) 0:00:24.349 ********
2026-03-26 04:06:39.151612 | orchestrator |
2026-03-26 04:06:39.151623 | orchestrator | TASK [Flush handlers] **********************************************************
2026-03-26 04:06:39.151634 | orchestrator | Thursday 26 March 2026 04:06:36 +0000 (0:00:00.074) 0:00:24.419 ********
2026-03-26 04:06:39.151644 | orchestrator |
2026-03-26 04:06:39.151655 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-03-26 04:06:39.151666 | orchestrator | Thursday 26 March 2026 04:06:36 +0000 (0:00:00.074) 0:00:24.494 ********
2026-03-26 04:06:39.151685 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-26 04:06:39.151696 | orchestrator |
2026-03-26 04:06:39.151709 | orchestrator | TASK [Print report file information]
******************************************* 2026-03-26 04:06:39.151728 | orchestrator | Thursday 26 March 2026 04:06:38 +0000 (0:00:01.557) 0:00:26.051 ******** 2026-03-26 04:06:39.151795 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-03-26 04:06:39.151816 | orchestrator |  "msg": [ 2026-03-26 04:06:39.151837 | orchestrator |  "Validator run completed.", 2026-03-26 04:06:39.151857 | orchestrator |  "You can find the report file here:", 2026-03-26 04:06:39.151876 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-03-26T04:06:13+00:00-report.json", 2026-03-26 04:06:39.151891 | orchestrator |  "on the following host:", 2026-03-26 04:06:39.151903 | orchestrator |  "testbed-manager" 2026-03-26 04:06:39.151914 | orchestrator |  ] 2026-03-26 04:06:39.151925 | orchestrator | } 2026-03-26 04:06:39.151936 | orchestrator | 2026-03-26 04:06:39.151948 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:06:39.151960 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-26 04:06:39.151972 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-26 04:06:39.151984 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-26 04:06:39.151995 | orchestrator | 2026-03-26 04:06:39.152009 | orchestrator | 2026-03-26 04:06:39.152028 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:06:39.152047 | orchestrator | Thursday 26 March 2026 04:06:38 +0000 (0:00:00.590) 0:00:26.642 ******** 2026-03-26 04:06:39.152065 | orchestrator | =============================================================================== 2026-03-26 04:06:39.152085 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.57s 2026-03-26 
04:06:39.152103 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.72s 2026-03-26 04:06:39.152120 | orchestrator | Aggregate test results step one ----------------------------------------- 1.68s 2026-03-26 04:06:39.152137 | orchestrator | Write report file ------------------------------------------------------- 1.56s 2026-03-26 04:06:39.152156 | orchestrator | Get timestamp for report file ------------------------------------------- 0.87s 2026-03-26 04:06:39.152175 | orchestrator | Create report output directory ------------------------------------------ 0.79s 2026-03-26 04:06:39.152194 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.78s 2026-03-26 04:06:39.152213 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.78s 2026-03-26 04:06:39.152231 | orchestrator | Aggregate test results step one ----------------------------------------- 0.68s 2026-03-26 04:06:39.152250 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.68s 2026-03-26 04:06:39.152269 | orchestrator | Print report file information ------------------------------------------- 0.59s 2026-03-26 04:06:39.152288 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.58s 2026-03-26 04:06:39.152307 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.57s 2026-03-26 04:06:39.152318 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.55s 2026-03-26 04:06:39.152329 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s 2026-03-26 04:06:39.152340 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.51s 2026-03-26 04:06:39.152351 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s 2026-03-26 04:06:39.152362 
| orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-03-26 04:06:39.152383 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.48s 2026-03-26 04:06:39.152395 | orchestrator | Fail test if any sub test failed ---------------------------------------- 0.34s 2026-03-26 04:06:39.458285 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-03-26 04:06:39.466797 | orchestrator | + set -e 2026-03-26 04:06:39.466893 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 04:06:39.468389 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 04:06:39.468438 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 04:06:39.468458 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 04:06:39.468476 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 04:06:39.468493 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 04:06:39.468514 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 04:06:39.468536 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 04:06:39.468553 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 04:06:39.468571 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 04:06:39.468588 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 04:06:39.468606 | orchestrator | ++ export ARA=false 2026-03-26 04:06:39.468623 | orchestrator | ++ ARA=false 2026-03-26 04:06:39.468642 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 04:06:39.468659 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 04:06:39.468677 | orchestrator | ++ export TEMPEST=false 2026-03-26 04:06:39.468696 | orchestrator | ++ TEMPEST=false 2026-03-26 04:06:39.468715 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 04:06:39.468735 | orchestrator | ++ IS_ZUUL=true 2026-03-26 04:06:39.468849 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 04:06:39.468869 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 04:06:39.468887 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 04:06:39.468904 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 04:06:39.468922 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 04:06:39.468942 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 04:06:39.468960 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 04:06:39.468978 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 04:06:39.468996 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 04:06:39.469016 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 04:06:39.469035 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-03-26 04:06:39.469053 | orchestrator | + source /etc/os-release 2026-03-26 04:06:39.469071 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-03-26 04:06:39.469090 | orchestrator | ++ NAME=Ubuntu 2026-03-26 04:06:39.469108 | orchestrator | ++ VERSION_ID=24.04 2026-03-26 04:06:39.469128 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-03-26 04:06:39.469147 | orchestrator | ++ VERSION_CODENAME=noble 2026-03-26 04:06:39.469167 | orchestrator | ++ ID=ubuntu 2026-03-26 04:06:39.469186 | orchestrator | ++ ID_LIKE=debian 2026-03-26 04:06:39.469204 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-03-26 04:06:39.469223 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-03-26 04:06:39.469242 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-03-26 04:06:39.469263 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-03-26 04:06:39.469285 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-03-26 04:06:39.469304 | orchestrator | ++ LOGO=ubuntu-logo 2026-03-26 04:06:39.469324 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-03-26 04:06:39.469360 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-03-26 
04:06:39.469383 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-26 04:06:39.498792 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-03-26 04:07:02.905886 | orchestrator | 2026-03-26 04:07:02.905979 | orchestrator | # Status of Elasticsearch 2026-03-26 04:07:02.905989 | orchestrator | 2026-03-26 04:07:02.905995 | orchestrator | + pushd /opt/configuration/contrib 2026-03-26 04:07:02.906002 | orchestrator | + echo 2026-03-26 04:07:02.906009 | orchestrator | + echo '# Status of Elasticsearch' 2026-03-26 04:07:02.906043 | orchestrator | + echo 2026-03-26 04:07:02.906051 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-03-26 04:07:03.080255 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-03-26 04:07:03.080561 | orchestrator | 2026-03-26 04:07:03.080581 | orchestrator | # Status of MariaDB 2026-03-26 04:07:03.080593 | orchestrator | 2026-03-26 04:07:03.080604 | orchestrator | + echo 2026-03-26 04:07:03.080614 | orchestrator | + echo '# Status of MariaDB' 2026-03-26 04:07:03.080624 | orchestrator | + echo 2026-03-26 04:07:03.081476 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-26 04:07:03.126709 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-26 04:07:03.126823 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-26 04:07:03.126837 | orchestrator | + MARIADB_USER=root_shard_0 2026-03-26 04:07:03.126850 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-03-26 04:07:03.197582 
| orchestrator | Reading package lists... 2026-03-26 04:07:03.533656 | orchestrator | Building dependency tree... 2026-03-26 04:07:03.533934 | orchestrator | Reading state information... 2026-03-26 04:07:03.896573 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-03-26 04:07:03.896649 | orchestrator | bc set to manually installed. 2026-03-26 04:07:03.896659 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-03-26 04:07:04.514452 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-03-26 04:07:04.514549 | orchestrator | 2026-03-26 04:07:04.514566 | orchestrator | # Status of Prometheus 2026-03-26 04:07:04.514579 | orchestrator | 2026-03-26 04:07:04.514591 | orchestrator | + echo 2026-03-26 04:07:04.514603 | orchestrator | + echo '# Status of Prometheus' 2026-03-26 04:07:04.514615 | orchestrator | + echo 2026-03-26 04:07:04.514627 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-03-26 04:07:04.570691 | orchestrator | Unauthorized 2026-03-26 04:07:04.573400 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-03-26 04:07:04.630571 | orchestrator | Unauthorized 2026-03-26 04:07:04.633682 | orchestrator | 2026-03-26 04:07:04.633731 | orchestrator | # Status of RabbitMQ 2026-03-26 04:07:04.633779 | orchestrator | 2026-03-26 04:07:04.633800 | orchestrator | + echo 2026-03-26 04:07:04.633820 | orchestrator | + echo '# Status of RabbitMQ' 2026-03-26 04:07:04.633840 | orchestrator | + echo 2026-03-26 04:07:04.634379 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-03-26 04:07:04.691485 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-26 04:07:04.691575 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-03-26 04:07:04.691592 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-03-26 04:07:05.118600 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-03-26 04:07:05.128073 | orchestrator | 2026-03-26 04:07:05.128138 | orchestrator | # Status of Redis 2026-03-26 04:07:05.128152 | orchestrator | 2026-03-26 04:07:05.128164 | orchestrator | + echo 2026-03-26 04:07:05.128176 | orchestrator | + echo '# Status of Redis' 2026-03-26 04:07:05.128188 | orchestrator | + echo 2026-03-26 04:07:05.128201 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-03-26 04:07:05.136001 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002068s;;;0.000000;10.000000 2026-03-26 04:07:05.136668 | orchestrator | 2026-03-26 04:07:05.136694 | orchestrator | + popd 2026-03-26 04:07:05.136707 | orchestrator | + echo 2026-03-26 04:07:05.136718 | orchestrator | # Create backup of MariaDB database 2026-03-26 04:07:05.136731 | orchestrator | 2026-03-26 04:07:05.136773 | orchestrator | + echo '# Create backup of MariaDB database' 2026-03-26 04:07:05.136786 | orchestrator | + echo 2026-03-26 04:07:05.136798 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-03-26 04:07:07.156706 | orchestrator | 2026-03-26 04:07:07 | INFO  | Task 7b1aa3d5-846c-4962-a0bc-63f2776dc6a9 (mariadb_backup) was prepared for execution. 2026-03-26 04:07:07.156826 | orchestrator | 2026-03-26 04:07:07 | INFO  | It takes a moment until task 7b1aa3d5-846c-4962-a0bc-63f2776dc6a9 (mariadb_backup) has been started and output is visible here. 
2026-03-26 04:10:26.521655 | orchestrator | 2026-03-26 04:10:26.521882 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 04:10:26.521914 | orchestrator | 2026-03-26 04:10:26.521960 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 04:10:26.521983 | orchestrator | Thursday 26 March 2026 04:07:11 +0000 (0:00:00.177) 0:00:00.177 ******** 2026-03-26 04:10:26.522182 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:10:26.522213 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:10:26.522234 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:10:26.522255 | orchestrator | 2026-03-26 04:10:26.522275 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 04:10:26.522298 | orchestrator | Thursday 26 March 2026 04:07:11 +0000 (0:00:00.313) 0:00:00.491 ******** 2026-03-26 04:10:26.522318 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-26 04:10:26.522334 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-26 04:10:26.522347 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-26 04:10:26.522359 | orchestrator | 2026-03-26 04:10:26.522372 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-26 04:10:26.522385 | orchestrator | 2026-03-26 04:10:26.522397 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-26 04:10:26.522410 | orchestrator | Thursday 26 March 2026 04:07:12 +0000 (0:00:00.652) 0:00:01.144 ******** 2026-03-26 04:10:26.522423 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 04:10:26.522436 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-26 04:10:26.522449 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-26 04:10:26.522462 | orchestrator | 
2026-03-26 04:10:26.522484 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-26 04:10:26.522497 | orchestrator | Thursday 26 March 2026 04:07:12 +0000 (0:00:00.429) 0:00:01.574 ******** 2026-03-26 04:10:26.522510 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:10:26.522524 | orchestrator | 2026-03-26 04:10:26.522536 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-03-26 04:10:26.522547 | orchestrator | Thursday 26 March 2026 04:07:13 +0000 (0:00:00.572) 0:00:02.147 ******** 2026-03-26 04:10:26.522558 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:10:26.522569 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:10:26.522580 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:10:26.522591 | orchestrator | 2026-03-26 04:10:26.522602 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-03-26 04:10:26.522613 | orchestrator | Thursday 26 March 2026 04:07:16 +0000 (0:00:03.248) 0:00:05.395 ******** 2026-03-26 04:10:26.522624 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:10:26.522635 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:10:26.522646 | orchestrator | 2026-03-26 04:10:26.522657 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-26 04:10:26.522668 | orchestrator | 2026-03-26 04:10:26.522679 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2026-03-26 04:10:26.522690 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-26 04:10:26.522701 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-03-26 04:10:26.522712 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 
2026-03-26 04:10:26.522723 | orchestrator | mariadb_bootstrap_restart 2026-03-26 04:10:26.522734 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:10:26.522745 | orchestrator | 2026-03-26 04:10:26.522784 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-26 04:10:26.522802 | orchestrator | skipping: no hosts matched 2026-03-26 04:10:26.522813 | orchestrator | 2026-03-26 04:10:26.522824 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-26 04:10:26.522835 | orchestrator | skipping: no hosts matched 2026-03-26 04:10:26.522846 | orchestrator | 2026-03-26 04:10:26.522857 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-26 04:10:26.522868 | orchestrator | skipping: no hosts matched 2026-03-26 04:10:26.522879 | orchestrator | 2026-03-26 04:10:26.522890 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-26 04:10:26.522912 | orchestrator | 2026-03-26 04:10:26.522923 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-26 04:10:26.522934 | orchestrator | Thursday 26 March 2026 04:10:25 +0000 (0:03:08.742) 0:03:14.138 ******** 2026-03-26 04:10:26.522945 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:10:26.522956 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:10:26.522967 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:10:26.522978 | orchestrator | 2026-03-26 04:10:26.522989 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-26 04:10:26.523000 | orchestrator | Thursday 26 March 2026 04:10:25 +0000 (0:00:00.304) 0:03:14.442 ******** 2026-03-26 04:10:26.523011 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:10:26.523022 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:10:26.523033 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 04:10:26.523044 | orchestrator | 2026-03-26 04:10:26.523055 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:10:26.523067 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 04:10:26.523080 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 04:10:26.523091 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 04:10:26.523102 | orchestrator | 2026-03-26 04:10:26.523113 | orchestrator | 2026-03-26 04:10:26.523124 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:10:26.523135 | orchestrator | Thursday 26 March 2026 04:10:26 +0000 (0:00:00.435) 0:03:14.878 ******** 2026-03-26 04:10:26.523169 | orchestrator | =============================================================================== 2026-03-26 04:10:26.523181 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 188.74s 2026-03-26 04:10:26.523192 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.25s 2026-03-26 04:10:26.523203 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s 2026-03-26 04:10:26.523214 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.57s 2026-03-26 04:10:26.523225 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.44s 2026-03-26 04:10:26.523236 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2026-03-26 04:10:26.523247 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2026-03-26 04:10:26.523258 | orchestrator | Include mariadb post-deploy.yml 
----------------------------------------- 0.30s 2026-03-26 04:10:26.829074 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-03-26 04:10:26.839194 | orchestrator | + set -e 2026-03-26 04:10:26.839276 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 04:10:26.840304 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 04:10:26.840343 | orchestrator | ++ INTERACTIVE=false 2026-03-26 04:10:26.840363 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 04:10:26.840500 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 04:10:26.840527 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-26 04:10:26.843819 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-26 04:10:26.851454 | orchestrator | 2026-03-26 04:10:26.851570 | orchestrator | # OpenStack endpoints 2026-03-26 04:10:26.851585 | orchestrator | 2026-03-26 04:10:26.851596 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 04:10:26.851608 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 04:10:26.851638 | orchestrator | + export OS_CLOUD=admin 2026-03-26 04:10:26.851650 | orchestrator | + OS_CLOUD=admin 2026-03-26 04:10:26.851661 | orchestrator | + echo 2026-03-26 04:10:26.851673 | orchestrator | + echo '# OpenStack endpoints' 2026-03-26 04:10:26.851684 | orchestrator | + echo 2026-03-26 04:10:26.851695 | orchestrator | + openstack endpoint list 2026-03-26 04:10:30.087652 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-26 04:10:30.087751 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-03-26 04:10:30.087772 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-26 04:10:30.087778 | orchestrator | | 0033fc892da9412fa8a093fd5d2a963f | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-03-26 04:10:30.087784 | orchestrator | | 08783899140940f1ab1e28d0eb212655 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-03-26 04:10:30.087789 | orchestrator | | 0a595fefe7e94bd7a92b4bec73d647a6 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-03-26 04:10:30.087793 | orchestrator | | 186cae91475646dfbbc932451c78d918 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-03-26 04:10:30.087798 | orchestrator | | 2070bf277fa04949b94df4bb73a01ad5 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-03-26 04:10:30.087803 | orchestrator | | 214d96bfc5c74deaa0287323a323b818 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-03-26 04:10:30.087807 | orchestrator | | 3089431d8d8b4f0ba7824a5faf669f60 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-03-26 04:10:30.087812 | orchestrator | | 35fc90d9137b4d7f92d7ac68014b9fc6 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-03-26 04:10:30.087817 | orchestrator | | 3b15fd0003e04f9f8aa0c22a2ea26c73 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-03-26 04:10:30.087821 | orchestrator | | 4fbec3dc7ce74b8cb3c6d0974f2e9e1d | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-03-26 04:10:30.087826 | orchestrator | | 52db55a577c242bea758b13e5034c75a | RegionOne | 
aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-03-26 04:10:30.087831 | orchestrator | | 56d8259ac37946e2aff58d5495dc0f81 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-03-26 04:10:30.087836 | orchestrator | | 57fa80cce7544a10a586776995a291a6 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-03-26 04:10:30.087840 | orchestrator | | 75c163e497a14d5bb1f2b9ee146083e9 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-03-26 04:10:30.087845 | orchestrator | | 856b24feafb3443b9aaeae1f19b86647 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-03-26 04:10:30.087850 | orchestrator | | 8ba59f40ae3b4193b5819b7c08fd3243 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-26 04:10:30.087854 | orchestrator | | 9809bb157e744c398e8f9c7fcac26a1a | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-03-26 04:10:30.087859 | orchestrator | | 9989f77bad6840df9a22ad29181f93ea | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-03-26 04:10:30.087869 | orchestrator | | a59948b5184e45e084d6aaa31149f210 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-03-26 04:10:30.087884 | orchestrator | | a5d9ce36abcb47c3b75e5988e8e015df | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-03-26 04:10:30.087900 | orchestrator | | b53ac5aaf5914c0cb65e717d7642ad90 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-03-26 04:10:30.087905 | orchestrator | | c577ae50a66d49879af93d9946c176b8 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 
2026-03-26 04:10:30.087910 | orchestrator | | c8cf89a51f61454891009bb19cafd4c4 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-03-26 04:10:30.087915 | orchestrator | | ce290ff5801e446f8041086f7a131bab | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-03-26 04:10:30.087920 | orchestrator | | d4f79ca24681457c8ddb3d19e9973a63 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-03-26 04:10:30.087924 | orchestrator | | db3adab6cd89404792a59d0f59461602 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-03-26 04:10:30.087929 | orchestrator | | db945c2decd04c0bb50c44206ee77a79 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-03-26 04:10:30.087934 | orchestrator | | e3fa99a3313a44d1acb8aa7427f045de | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-03-26 04:10:30.087938 | orchestrator | | f1d018f3e5f6498c9420138b7ade2a36 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-03-26 04:10:30.087943 | orchestrator | | f7da6a40e78f462ea2d8069b207f9e0d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-03-26 04:10:30.087948 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-03-26 04:10:30.338563 | orchestrator | 2026-03-26 04:10:30.338660 | orchestrator | # Cinder 2026-03-26 04:10:30.338675 | orchestrator | 2026-03-26 04:10:30.338687 | orchestrator | + echo 2026-03-26 04:10:30.338699 | orchestrator | + echo '# Cinder' 2026-03-26 04:10:30.338710 | orchestrator | + echo 2026-03-26 04:10:30.338722 | orchestrator | + 
openstack volume service list 2026-03-26 04:10:32.977061 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-26 04:10:32.977167 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-03-26 04:10:32.977183 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-26 04:10:32.977195 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-26T04:10:24.000000 | 2026-03-26 04:10:32.977206 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-26T04:10:24.000000 | 2026-03-26 04:10:32.977217 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-03-26T04:10:24.000000 | 2026-03-26 04:10:32.977228 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-03-26T04:10:24.000000 | 2026-03-26 04:10:32.977238 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-03-26T04:10:31.000000 | 2026-03-26 04:10:32.977277 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-03-26T04:10:32.000000 | 2026-03-26 04:10:32.977289 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-03-26T04:10:29.000000 | 2026-03-26 04:10:32.977300 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-03-26T04:10:31.000000 | 2026-03-26 04:10:32.977310 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-03-26T04:10:32.000000 | 2026-03-26 04:10:32.977321 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-03-26 04:10:33.255712 | orchestrator | 2026-03-26 04:10:33.255856 | orchestrator | # Neutron 2026-03-26 04:10:33.255874 | orchestrator | 2026-03-26 04:10:33.255887 | orchestrator | + 
echo 2026-03-26 04:10:33.255899 | orchestrator | + echo '# Neutron' 2026-03-26 04:10:33.255911 | orchestrator | + echo 2026-03-26 04:10:33.255922 | orchestrator | + openstack network agent list 2026-03-26 04:10:35.951691 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-26 04:10:35.951802 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-03-26 04:10:35.951812 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-26 04:10:35.951819 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-03-26 04:10:35.951842 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-03-26 04:10:35.951849 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-03-26 04:10:35.951856 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-03-26 04:10:35.951862 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-03-26 04:10:35.951868 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-03-26 04:10:35.951874 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-26 04:10:35.951881 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-03-26 04:10:35.951887 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | 
:-) | UP | neutron-ovn-metadata-agent | 2026-03-26 04:10:35.951893 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-03-26 04:10:36.233683 | orchestrator | + openstack network service provider list 2026-03-26 04:10:38.763346 | orchestrator | +---------------+------+---------+ 2026-03-26 04:10:38.763460 | orchestrator | | Service Type | Name | Default | 2026-03-26 04:10:38.763484 | orchestrator | +---------------+------+---------+ 2026-03-26 04:10:38.763498 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2026-03-26 04:10:38.763510 | orchestrator | +---------------+------+---------+ 2026-03-26 04:10:39.035538 | orchestrator | 2026-03-26 04:10:39.035632 | orchestrator | # Nova 2026-03-26 04:10:39.035653 | orchestrator | 2026-03-26 04:10:39.035671 | orchestrator | + echo 2026-03-26 04:10:39.035688 | orchestrator | + echo '# Nova' 2026-03-26 04:10:39.035705 | orchestrator | + echo 2026-03-26 04:10:39.035722 | orchestrator | + openstack compute service list 2026-03-26 04:10:41.705998 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-26 04:10:41.706134 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2026-03-26 04:10:41.706144 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-26 04:10:41.706150 | orchestrator | | 86eb2092-fe79-4181-8a5e-f8a408c1e3b9 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-03-26T04:10:40.000000 | 2026-03-26 04:10:41.706155 | orchestrator | | dd2eeb5e-c5d1-428b-bbce-901a429360a4 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-03-26T04:10:35.000000 | 2026-03-26 04:10:41.706161 | orchestrator | | 5892028d-cfad-462c-9137-727335627c04 | nova-scheduler | 
testbed-node-2 | internal | enabled | up | 2026-03-26T04:10:37.000000 | 2026-03-26 04:10:41.706167 | orchestrator | | f48bec24-2526-47fb-89c7-f4540915a0fa | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-03-26T04:10:37.000000 | 2026-03-26 04:10:41.706172 | orchestrator | | 7a41d58a-be90-4c07-b6b6-e07900ee5ad4 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-03-26T04:10:39.000000 | 2026-03-26 04:10:41.706178 | orchestrator | | f0aa1fcd-e220-419e-887b-6a41fe3dd532 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-03-26T04:10:40.000000 | 2026-03-26 04:10:41.706187 | orchestrator | | 83b43651-bd92-4d18-a2b5-fcbdc87357c5 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-03-26T04:10:35.000000 | 2026-03-26 04:10:41.706193 | orchestrator | | b9e61882-f01c-4632-a3ea-b296d9feba1c | nova-compute | testbed-node-3 | nova | enabled | up | 2026-03-26T04:10:35.000000 | 2026-03-26 04:10:41.706198 | orchestrator | | eff6a403-1dbd-4531-b082-376def61e6f9 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-03-26T04:10:35.000000 | 2026-03-26 04:10:41.706204 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-03-26 04:10:41.969118 | orchestrator | + openstack hypervisor list 2026-03-26 04:10:44.619598 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-26 04:10:44.724183 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-03-26 04:10:44.724263 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-26 04:10:44.724277 | orchestrator | | 227f7a23-f93c-4b92-b429-5a83b73d9fd5 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-03-26 04:10:44.724288 | orchestrator | | 147427a8-4597-4a2c-823c-3005e03af22a | testbed-node-3 | QEMU | 
192.168.16.13 | up | 2026-03-26 04:10:44.724300 | orchestrator | | 4a11d35e-b357-4267-8292-b9149dc34db0 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-03-26 04:10:44.724317 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-03-26 04:10:44.892059 | orchestrator | + echo 2026-03-26 04:10:44.892272 | orchestrator | 2026-03-26 04:10:44.892294 | orchestrator | # Run OpenStack test play 2026-03-26 04:10:44.892306 | orchestrator | 2026-03-26 04:10:44.892317 | orchestrator | + echo '# Run OpenStack test play' 2026-03-26 04:10:44.892329 | orchestrator | + echo 2026-03-26 04:10:44.892339 | orchestrator | + osism apply --environment openstack test 2026-03-26 04:10:46.872107 | orchestrator | 2026-03-26 04:10:46 | INFO  | Trying to run play test in environment openstack 2026-03-26 04:10:57.052158 | orchestrator | 2026-03-26 04:10:57 | INFO  | Task 49017abc-3eae-4005-afad-1c43fa7ba55f (test) was prepared for execution. 2026-03-26 04:10:57.052270 | orchestrator | 2026-03-26 04:10:57 | INFO  | It takes a moment until task 49017abc-3eae-4005-afad-1c43fa7ba55f (test) has been started and output is visible here. 
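The service listings above (Cinder, Neutron, Nova) are eyeballed in this job; the same check can be scripted so a non-"up" service fails the run. A minimal sketch, assuming output captured via the OpenStackClient value formatter (`-f value -c Binary -c State`) — sample data stands in for the live API call here:

```shell
#!/bin/sh
# Fail if any service reports a state other than "up".
# The heredoc stands in for e.g.:
#   openstack volume service list -f value -c Binary -c State
services=$(cat <<'EOF'
cinder-scheduler up
cinder-volume up
nova-compute up
EOF
)
# Collect binaries whose second column is not "up".
down=$(printf '%s\n' "$services" | awk '$2 != "up" {print $1}')
if [ -n "$down" ]; then
  echo "services down: $down"
  exit 1
fi
echo "all services up"
```

The same pattern applies to `openstack compute service list` and `openstack network agent list` (the latter prints `UP`/`DOWN` in its State column, so the comparison string differs).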
2026-03-26 04:13:43.592778 | orchestrator | 2026-03-26 04:13:43.592969 | orchestrator | PLAY [Create test project] ***************************************************** 2026-03-26 04:13:43.592994 | orchestrator | 2026-03-26 04:13:43.593006 | orchestrator | TASK [Create test domain] ****************************************************** 2026-03-26 04:13:43.593042 | orchestrator | Thursday 26 March 2026 04:11:01 +0000 (0:00:00.080) 0:00:00.080 ******** 2026-03-26 04:13:43.593054 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593066 | orchestrator | 2026-03-26 04:13:43.593077 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-03-26 04:13:43.593088 | orchestrator | Thursday 26 March 2026 04:11:04 +0000 (0:00:03.755) 0:00:03.835 ******** 2026-03-26 04:13:43.593098 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593109 | orchestrator | 2026-03-26 04:13:43.593120 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-03-26 04:13:43.593131 | orchestrator | Thursday 26 March 2026 04:11:09 +0000 (0:00:04.139) 0:00:07.975 ******** 2026-03-26 04:13:43.593142 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593152 | orchestrator | 2026-03-26 04:13:43.593163 | orchestrator | TASK [Create test project] ***************************************************** 2026-03-26 04:13:43.593174 | orchestrator | Thursday 26 March 2026 04:11:15 +0000 (0:00:06.524) 0:00:14.499 ******** 2026-03-26 04:13:43.593185 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593195 | orchestrator | 2026-03-26 04:13:43.593206 | orchestrator | TASK [Create test user] ******************************************************** 2026-03-26 04:13:43.593217 | orchestrator | Thursday 26 March 2026 04:11:19 +0000 (0:00:03.974) 0:00:18.474 ******** 2026-03-26 04:13:43.593228 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593239 | orchestrator | 2026-03-26 04:13:43.593250 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-03-26 04:13:43.593260 | orchestrator | Thursday 26 March 2026 04:11:23 +0000 (0:00:04.185) 0:00:22.659 ******** 2026-03-26 04:13:43.593271 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-03-26 04:13:43.593283 | orchestrator | changed: [localhost] => (item=member) 2026-03-26 04:13:43.593294 | orchestrator | changed: [localhost] => (item=creator) 2026-03-26 04:13:43.593307 | orchestrator | 2026-03-26 04:13:43.593319 | orchestrator | TASK [Create test server group] ************************************************ 2026-03-26 04:13:43.593348 | orchestrator | Thursday 26 March 2026 04:11:35 +0000 (0:00:11.421) 0:00:34.080 ******** 2026-03-26 04:13:43.593361 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593374 | orchestrator | 2026-03-26 04:13:43.593386 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-03-26 04:13:43.593399 | orchestrator | Thursday 26 March 2026 04:11:40 +0000 (0:00:05.096) 0:00:39.177 ******** 2026-03-26 04:13:43.593411 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593423 | orchestrator | 2026-03-26 04:13:43.593435 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-03-26 04:13:43.593448 | orchestrator | Thursday 26 March 2026 04:11:45 +0000 (0:00:04.925) 0:00:44.102 ******** 2026-03-26 04:13:43.593460 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593473 | orchestrator | 2026-03-26 04:13:43.593485 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-03-26 04:13:43.593497 | orchestrator | Thursday 26 March 2026 04:11:49 +0000 (0:00:04.320) 0:00:48.423 ******** 2026-03-26 04:13:43.593509 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593521 | orchestrator | 2026-03-26 04:13:43.593533 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-03-26 04:13:43.593546 | orchestrator | Thursday 26 March 2026 04:11:53 +0000 (0:00:03.923) 0:00:52.347 ******** 2026-03-26 04:13:43.593558 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593570 | orchestrator | 2026-03-26 04:13:43.593582 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-03-26 04:13:43.593594 | orchestrator | Thursday 26 March 2026 04:11:57 +0000 (0:00:04.091) 0:00:56.438 ******** 2026-03-26 04:13:43.593606 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593619 | orchestrator | 2026-03-26 04:13:43.593631 | orchestrator | TASK [Create test network] ***************************************************** 2026-03-26 04:13:43.593644 | orchestrator | Thursday 26 March 2026 04:12:01 +0000 (0:00:03.894) 0:01:00.332 ******** 2026-03-26 04:13:43.593666 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593679 | orchestrator | 2026-03-26 04:13:43.593691 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-03-26 04:13:43.593703 | orchestrator | Thursday 26 March 2026 04:12:06 +0000 (0:00:04.908) 0:01:05.241 ******** 2026-03-26 04:13:43.593714 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593724 | orchestrator | 2026-03-26 04:13:43.593735 | orchestrator | TASK [Create test router] ****************************************************** 2026-03-26 04:13:43.593746 | orchestrator | Thursday 26 March 2026 04:12:12 +0000 (0:00:05.809) 0:01:11.050 ******** 2026-03-26 04:13:43.593757 | orchestrator | changed: [localhost] 2026-03-26 04:13:43.593767 | orchestrator | 2026-03-26 04:13:43.593778 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-03-26 04:13:43.593789 | orchestrator | 2026-03-26 04:13:43.593800 | orchestrator | TASK [Get test server group] *************************************************** 2026-03-26 04:13:43.593811 
| orchestrator | Thursday 26 March 2026 04:12:23 +0000 (0:00:11.547) 0:01:22.598 ******** 2026-03-26 04:13:43.593822 | orchestrator | ok: [localhost] 2026-03-26 04:13:43.593833 | orchestrator | 2026-03-26 04:13:43.593844 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-03-26 04:13:43.593860 | orchestrator | Thursday 26 March 2026 04:12:27 +0000 (0:00:03.550) 0:01:26.149 ******** 2026-03-26 04:13:43.593871 | orchestrator | skipping: [localhost] 2026-03-26 04:13:43.593950 | orchestrator | 2026-03-26 04:13:43.593962 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-03-26 04:13:43.593973 | orchestrator | Thursday 26 March 2026 04:12:27 +0000 (0:00:00.058) 0:01:26.207 ******** 2026-03-26 04:13:43.593984 | orchestrator | skipping: [localhost] 2026-03-26 04:13:43.593995 | orchestrator | 2026-03-26 04:13:43.594006 | orchestrator | TASK [Delete test instances] *************************************************** 2026-03-26 04:13:43.594076 | orchestrator | Thursday 26 March 2026 04:12:27 +0000 (0:00:00.055) 0:01:26.263 ******** 2026-03-26 04:13:43.594088 | orchestrator | skipping: [localhost] => (item=test-4)  2026-03-26 04:13:43.594100 | orchestrator | skipping: [localhost] => (item=test-3)  2026-03-26 04:13:43.594130 | orchestrator | skipping: [localhost] => (item=test-2)  2026-03-26 04:13:43.594142 | orchestrator | skipping: [localhost] => (item=test-1)  2026-03-26 04:13:43.594153 | orchestrator | skipping: [localhost] => (item=test)  2026-03-26 04:13:43.594164 | orchestrator | skipping: [localhost] 2026-03-26 04:13:43.594186 | orchestrator | 2026-03-26 04:13:43.594197 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-03-26 04:13:43.594208 | orchestrator | Thursday 26 March 2026 04:12:27 +0000 (0:00:00.157) 0:01:26.420 ******** 2026-03-26 04:13:43.594219 | orchestrator | skipping: [localhost] 2026-03-26 
04:13:43.594229 | orchestrator | 2026-03-26 04:13:43.594240 | orchestrator | TASK [Create test instances] *************************************************** 2026-03-26 04:13:43.594251 | orchestrator | Thursday 26 March 2026 04:12:27 +0000 (0:00:00.152) 0:01:26.572 ******** 2026-03-26 04:13:43.594262 | orchestrator | changed: [localhost] => (item=test) 2026-03-26 04:13:43.594272 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-26 04:13:43.594283 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-26 04:13:43.594294 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-26 04:13:43.594305 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-26 04:13:43.594316 | orchestrator | 2026-03-26 04:13:43.594327 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-03-26 04:13:43.594338 | orchestrator | Thursday 26 March 2026 04:12:32 +0000 (0:00:04.784) 0:01:31.357 ******** 2026-03-26 04:13:43.594348 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-03-26 04:13:43.594360 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-03-26 04:13:43.594371 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-03-26 04:13:43.594382 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-03-26 04:13:43.594403 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j538260717191.3753', 'results_file': '/ansible/.ansible_async/j538260717191.3753', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594417 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-03-26 04:13:43.594429 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j899561745004.3778', 'results_file': '/ansible/.ansible_async/j899561745004.3778', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594440 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j588604499832.3803', 'results_file': '/ansible/.ansible_async/j588604499832.3803', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594452 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j405640357128.3828', 'results_file': '/ansible/.ansible_async/j405640357128.3828', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594462 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j525201259332.3853', 'results_file': '/ansible/.ansible_async/j525201259332.3853', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594472 | orchestrator | 2026-03-26 04:13:43.594481 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-03-26 04:13:43.594491 | orchestrator | Thursday 26 March 2026 04:13:29 +0000 (0:00:57.245) 0:02:28.602 ******** 2026-03-26 04:13:43.594501 | orchestrator | changed: [localhost] => (item=test) 2026-03-26 04:13:43.594510 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-26 04:13:43.594520 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-26 04:13:43.594530 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-26 04:13:43.594539 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-26 04:13:43.594549 | orchestrator | 2026-03-26 04:13:43.594559 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 
2026-03-26 04:13:43.594568 | orchestrator | Thursday 26 March 2026 04:13:34 +0000 (0:00:04.390) 0:02:32.993 ******** 2026-03-26 04:13:43.594578 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-03-26 04:13:43.594602 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j984712504400.3965', 'results_file': '/ansible/.ansible_async/j984712504400.3965', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594621 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j533774217712.3990', 'results_file': '/ansible/.ansible_async/j533774217712.3990', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594639 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j898183755290.4015', 'results_file': '/ansible/.ansible_async/j898183755290.4015', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-26 04:13:43.594665 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j768463755450.4040', 'results_file': '/ansible/.ansible_async/j768463755450.4040', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-26 04:14:23.135995 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j698557361688.4065', 'results_file': '/ansible/.ansible_async/j698557361688.4065', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-26 04:14:23.136093 | orchestrator | 2026-03-26 04:14:23.136105 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-03-26 04:14:23.136131 | orchestrator | Thursday 26 March 2026 04:13:43 +0000 (0:00:09.467) 0:02:42.461 ******** 2026-03-26 04:14:23.136138 | orchestrator | changed: 
[localhost] => (item=test) 2026-03-26 04:14:23.136147 | orchestrator | changed: [localhost] => (item=test-1) 2026-03-26 04:14:23.136154 | orchestrator | changed: [localhost] => (item=test-2) 2026-03-26 04:14:23.136161 | orchestrator | changed: [localhost] => (item=test-3) 2026-03-26 04:14:23.136168 | orchestrator | changed: [localhost] => (item=test-4) 2026-03-26 04:14:23.136175 | orchestrator | 2026-03-26 04:14:23.136182 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-03-26 04:14:23.136189 | orchestrator | Thursday 26 March 2026 04:13:48 +0000 (0:00:04.613) 0:02:47.075 ******** 2026-03-26 04:14:23.136196 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-03-26 04:14:23.136204 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j766959703281.4134', 'results_file': '/ansible/.ansible_async/j766959703281.4134', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-03-26 04:14:23.136212 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j744976519201.4159', 'results_file': '/ansible/.ansible_async/j744976519201.4159', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-03-26 04:14:23.136219 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j468878930547.4191', 'results_file': '/ansible/.ansible_async/j468878930547.4191', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-03-26 04:14:23.136226 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j481909587613.4217', 'results_file': '/ansible/.ansible_async/j481909587613.4217', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-03-26 04:14:23.136232 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 
'finished': 0, 'ansible_job_id': 'j4746500476.4243', 'results_file': '/ansible/.ansible_async/j4746500476.4243', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-03-26 04:14:23.136239 | orchestrator | 2026-03-26 04:14:23.136246 | orchestrator | TASK [Create test volume] ****************************************************** 2026-03-26 04:14:23.136253 | orchestrator | Thursday 26 March 2026 04:13:57 +0000 (0:00:09.731) 0:02:56.806 ******** 2026-03-26 04:14:23.136259 | orchestrator | changed: [localhost] 2026-03-26 04:14:23.136266 | orchestrator | 2026-03-26 04:14:23.136273 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-03-26 04:14:23.136280 | orchestrator | Thursday 26 March 2026 04:14:04 +0000 (0:00:06.098) 0:03:02.904 ******** 2026-03-26 04:14:23.136286 | orchestrator | changed: [localhost] 2026-03-26 04:14:23.136293 | orchestrator | 2026-03-26 04:14:23.136300 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-03-26 04:14:23.136306 | orchestrator | Thursday 26 March 2026 04:14:17 +0000 (0:00:13.445) 0:03:16.350 ******** 2026-03-26 04:14:23.136313 | orchestrator | ok: [localhost] 2026-03-26 04:14:23.136321 | orchestrator | 2026-03-26 04:14:23.136327 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-03-26 04:14:23.136334 | orchestrator | Thursday 26 March 2026 04:14:22 +0000 (0:00:05.342) 0:03:21.693 ******** 2026-03-26 04:14:23.136341 | orchestrator | ok: [localhost] => { 2026-03-26 04:14:23.136348 | orchestrator |  "msg": "192.168.112.156" 2026-03-26 04:14:23.136355 | orchestrator | } 2026-03-26 04:14:23.136362 | orchestrator | 2026-03-26 04:14:23.136368 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:14:23.136376 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 
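The "FAILED - RETRYING" messages above are not errors: the play launches instance creation asynchronously and then polls each `ansible_job_id` until it finishes, burning one retry per poll. The same wait-until-ready pattern in plain shell, with a stand-in status function replacing the real `openstack server show -f value -c status` call:

```shell
#!/bin/sh
# Poll a status command until it reports ACTIVE or retries run out,
# mirroring the play's retry loop (60 retries in the log above).
retries=60
check_status() {
  # Stand-in; the real check would query the Nova API, e.g.
  #   openstack server show test -f value -c status
  echo ACTIVE
}
status=""
while [ "$retries" -gt 0 ]; do
  status=$(check_status)
  [ "$status" = "ACTIVE" ] && break
  retries=$((retries - 1))
  sleep 5
done
echo "final status: $status"
```

Each retry message in the log corresponds to one failed iteration of such a loop; the build only fails when the counter reaches zero.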
2026-03-26 04:14:23.136384 | orchestrator | 2026-03-26 04:14:23.136391 | orchestrator | 2026-03-26 04:14:23.136410 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:14:23.136422 | orchestrator | Thursday 26 March 2026 04:14:22 +0000 (0:00:00.050) 0:03:21.744 ******** 2026-03-26 04:14:23.136429 | orchestrator | =============================================================================== 2026-03-26 04:14:23.136436 | orchestrator | Wait for instance creation to complete --------------------------------- 57.25s 2026-03-26 04:14:23.136443 | orchestrator | Attach test volume ----------------------------------------------------- 13.45s 2026-03-26 04:14:23.136450 | orchestrator | Create test router ----------------------------------------------------- 11.55s 2026-03-26 04:14:23.136456 | orchestrator | Add member roles to user test ------------------------------------------ 11.42s 2026-03-26 04:14:23.136463 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.73s 2026-03-26 04:14:23.136470 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.47s 2026-03-26 04:14:23.136476 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.52s 2026-03-26 04:14:23.136497 | orchestrator | Create test volume ------------------------------------------------------ 6.10s 2026-03-26 04:14:23.136504 | orchestrator | Create test subnet ------------------------------------------------------ 5.81s 2026-03-26 04:14:23.136511 | orchestrator | Create floating ip address ---------------------------------------------- 5.34s 2026-03-26 04:14:23.136518 | orchestrator | Create test server group ------------------------------------------------ 5.10s 2026-03-26 04:14:23.136524 | orchestrator | Create ssh security group ----------------------------------------------- 4.93s 2026-03-26 04:14:23.136531 | orchestrator | Create test 
network ----------------------------------------------------- 4.91s 2026-03-26 04:14:23.136537 | orchestrator | Create test instances --------------------------------------------------- 4.78s 2026-03-26 04:14:23.136544 | orchestrator | Add tag to instances ---------------------------------------------------- 4.61s 2026-03-26 04:14:23.136551 | orchestrator | Add metadata to instances ----------------------------------------------- 4.39s 2026-03-26 04:14:23.136557 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.32s 2026-03-26 04:14:23.136565 | orchestrator | Create test user -------------------------------------------------------- 4.19s 2026-03-26 04:14:23.136571 | orchestrator | Create test-admin user -------------------------------------------------- 4.14s 2026-03-26 04:14:23.136578 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.09s 2026-03-26 04:14:23.420888 | orchestrator | + server_list 2026-03-26 04:14:23.421017 | orchestrator | + openstack --os-cloud test server list 2026-03-26 04:14:27.230110 | orchestrator | +--------------------------------------+--------+--------+--------------------------------------+--------------------------+----------+ 2026-03-26 04:14:27.230215 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-03-26 04:14:27.230229 | orchestrator | +--------------------------------------+--------+--------+--------------------------------------+--------------------------+----------+ 2026-03-26 04:14:27.230241 | orchestrator | | 9f84049c-743f-4895-8425-3c6d6b59650a | test-4 | ACTIVE | test=192.168.112.180, 192.168.200.63 | N/A (booted from volume) | SCS-1L-1 | 2026-03-26 04:14:27.230252 | orchestrator | | a86d3bc5-e53a-421d-bf49-90bac35835eb | test-3 | ACTIVE | test=192.168.112.146, 192.168.200.91 | N/A (booted from volume) | SCS-1L-1 | 2026-03-26 04:14:27.230263 | orchestrator | | 72fea215-b4a0-47d7-a5f6-1c164d423e5f | test | ACTIVE | 
test=192.168.112.156, 192.168.200.3 | N/A (booted from volume) | SCS-1L-1 | 2026-03-26 04:14:27.230274 | orchestrator | | f05d3527-1651-466e-b2c0-b2e9034c1f08 | test-2 | ACTIVE | test=192.168.112.181, 192.168.200.17 | N/A (booted from volume) | SCS-1L-1 | 2026-03-26 04:14:27.230285 | orchestrator | | f6a0beab-2de6-40d7-8e1c-30e7df59b6ae | test-1 | ACTIVE | test=192.168.112.117, 192.168.200.89 | N/A (booted from volume) | SCS-1L-1 | 2026-03-26 04:14:27.230297 | orchestrator | +--------------------------------------+--------+--------+--------------------------------------+--------------------------+----------+ 2026-03-26 04:14:27.482845 | orchestrator | + openstack --os-cloud test server show test 2026-03-26 04:14:30.732302 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:30.732423 | orchestrator | | Field | Value | 2026-03-26 04:14:30.732450 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:30.732464 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-26 04:14:30.732476 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-26 04:14:30.732487 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-26 04:14:30.732499 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 
2026-03-26 04:14:30.732511 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-26 04:14:30.732522 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-26 04:14:30.732553 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-26 04:14:30.732590 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-26 04:14:30.732630 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-26 04:14:30.732643 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-26 04:14:30.732654 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-26 04:14:30.732666 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-26 04:14:30.732678 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-26 04:14:30.732689 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-26 04:14:30.732702 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-26 04:14:30.732714 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-26T04:13:03.000000 | 2026-03-26 04:14:30.732748 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-26 04:14:30.732770 | orchestrator | | accessIPv4 | | 2026-03-26 04:14:30.732782 | orchestrator | | accessIPv6 | | 2026-03-26 04:14:30.732798 | orchestrator | | addresses | test=192.168.112.156, 192.168.200.3 | 2026-03-26 04:14:30.732810 | orchestrator | | config_drive | | 2026-03-26 04:14:30.732823 | orchestrator | | created | 2026-03-26T04:12:37Z | 2026-03-26 04:14:30.732837 | orchestrator | | description | None | 2026-03-26 04:14:30.732850 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-26 04:14:30.732863 | orchestrator | | 
hostId | 3c4a154524be0044d9af6f6e2ac04bbe090637876489341638c46842 | 2026-03-26 04:14:30.732884 | orchestrator | | host_status | None | 2026-03-26 04:14:30.732904 | orchestrator | | id | 72fea215-b4a0-47d7-a5f6-1c164d423e5f | 2026-03-26 04:14:30.732917 | orchestrator | | image | N/A (booted from volume) | 2026-03-26 04:14:30.732959 | orchestrator | | key_name | test | 2026-03-26 04:14:30.732978 | orchestrator | | locked | False | 2026-03-26 04:14:30.732992 | orchestrator | | locked_reason | None | 2026-03-26 04:14:30.733005 | orchestrator | | name | test | 2026-03-26 04:14:30.733018 | orchestrator | | pinned_availability_zone | None | 2026-03-26 04:14:30.733031 | orchestrator | | progress | 0 | 2026-03-26 04:14:30.733051 | orchestrator | | project_id | 77d2c6f1cc6541f3bbb616e01134d7dd | 2026-03-26 04:14:30.733064 | orchestrator | | properties | hostname='test' | 2026-03-26 04:14:30.733084 | orchestrator | | security_groups | name='ssh' | 2026-03-26 04:14:30.733098 | orchestrator | | | name='icmp' | 2026-03-26 04:14:30.733116 | orchestrator | | server_groups | None | 2026-03-26 04:14:30.733129 | orchestrator | | status | ACTIVE | 2026-03-26 04:14:30.733142 | orchestrator | | tags | test | 2026-03-26 04:14:30.733155 | orchestrator | | trusted_image_certificates | None | 2026-03-26 04:14:30.733169 | orchestrator | | updated | 2026-03-26T04:13:36Z | 2026-03-26 04:14:30.733182 | orchestrator | | user_id | f5786063540c4b40a3a24028a0f17712 | 2026-03-26 04:14:30.733206 | orchestrator | | volumes_attached | delete_on_termination='True', id='41d770c6-35d8-40d7-98a6-bf5de323a1d4' | 2026-03-26 04:14:30.733218 | orchestrator | | | delete_on_termination='False', id='c0b1f93f-9c66-4f28-98bf-725944fcdd59' | 2026-03-26 04:14:30.735120 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:30.999311 | orchestrator | + openstack --os-cloud test server show test-1 2026-03-26 04:14:34.517196 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:34.517323 | orchestrator | | Field | Value | 2026-03-26 04:14:34.517340 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:34.517353 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-26 04:14:34.517365 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-26 04:14:34.517377 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-26 04:14:34.517410 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-03-26 04:14:34.517423 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-26 04:14:34.517434 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-26 
04:14:34.517465 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-26 04:14:34.517478 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-26 04:14:34.517494 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-26 04:14:34.517524 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-26 04:14:34.517536 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-26 04:14:34.517547 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-26 04:14:34.517567 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-26 04:14:34.517579 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-26 04:14:34.517591 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-26 04:14:34.517603 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-26T04:13:05.000000 | 2026-03-26 04:14:34.517623 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-26 04:14:34.517636 | orchestrator | | accessIPv4 | | 2026-03-26 04:14:34.517648 | orchestrator | | accessIPv6 | | 2026-03-26 04:14:34.517660 | orchestrator | | addresses | test=192.168.112.117, 192.168.200.89 | 2026-03-26 04:14:34.517671 | orchestrator | | config_drive | | 2026-03-26 04:14:34.517690 | orchestrator | | created | 2026-03-26T04:12:37Z | 2026-03-26 04:14:34.517701 | orchestrator | | description | None | 2026-03-26 04:14:34.517713 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-26 04:14:34.517725 | orchestrator | | hostId | 3c4a154524be0044d9af6f6e2ac04bbe090637876489341638c46842 | 2026-03-26 04:14:34.517739 | orchestrator | | host_status | None | 2026-03-26 04:14:34.517761 | orchestrator | | id | 
f6a0beab-2de6-40d7-8e1c-30e7df59b6ae | 2026-03-26 04:14:34.517782 | orchestrator | | image | N/A (booted from volume) | 2026-03-26 04:14:34.517801 | orchestrator | | key_name | test | 2026-03-26 04:14:34.517815 | orchestrator | | locked | False | 2026-03-26 04:14:34.517835 | orchestrator | | locked_reason | None | 2026-03-26 04:14:34.517849 | orchestrator | | name | test-1 | 2026-03-26 04:14:34.517862 | orchestrator | | pinned_availability_zone | None | 2026-03-26 04:14:34.517874 | orchestrator | | progress | 0 | 2026-03-26 04:14:34.517888 | orchestrator | | project_id | 77d2c6f1cc6541f3bbb616e01134d7dd | 2026-03-26 04:14:34.517901 | orchestrator | | properties | hostname='test-1' | 2026-03-26 04:14:34.517922 | orchestrator | | security_groups | name='ssh' | 2026-03-26 04:14:34.517959 | orchestrator | | | name='icmp' | 2026-03-26 04:14:34.517977 | orchestrator | | server_groups | None | 2026-03-26 04:14:34.517991 | orchestrator | | status | ACTIVE | 2026-03-26 04:14:34.518011 | orchestrator | | tags | test | 2026-03-26 04:14:34.518080 | orchestrator | | trusted_image_certificates | None | 2026-03-26 04:14:34.518120 | orchestrator | | updated | 2026-03-26T04:13:36Z | 2026-03-26 04:14:34.518132 | orchestrator | | user_id | f5786063540c4b40a3a24028a0f17712 | 2026-03-26 04:14:34.518143 | orchestrator | | volumes_attached | delete_on_termination='True', id='8241a54e-f4c8-4656-ba7f-0cc497b28890' | 2026-03-26 04:14:34.521675 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:34.774763 | orchestrator | + openstack --os-cloud test server show test-2 2026-03-26 04:14:37.869680 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:37.869769 | orchestrator | | Field | Value | 2026-03-26 04:14:37.869786 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:37.869811 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-26 04:14:37.869820 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-26 04:14:37.869827 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-26 04:14:37.869836 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-03-26 04:14:37.869844 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-26 04:14:37.869852 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-26 04:14:37.869874 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-26 04:14:37.869883 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-26 04:14:37.869894 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-26 04:14:37.869906 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-26 04:14:37.869914 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-26 04:14:37.869922 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-26 04:14:37.870006 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-03-26 04:14:37.870084 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-26 04:14:37.870095 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-26 04:14:37.870103 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-26T04:13:04.000000 | 2026-03-26 04:14:37.870119 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-26 04:14:37.870127 | orchestrator | | accessIPv4 | | 2026-03-26 04:14:37.870156 | orchestrator | | accessIPv6 | | 2026-03-26 04:14:37.870165 | orchestrator | | addresses | test=192.168.112.181, 192.168.200.17 | 2026-03-26 04:14:37.870181 | orchestrator | | config_drive | | 2026-03-26 04:14:37.870189 | orchestrator | | created | 2026-03-26T04:12:37Z | 2026-03-26 04:14:37.870197 | orchestrator | | description | None | 2026-03-26 04:14:37.870204 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-26 04:14:37.870212 | orchestrator | | hostId | 674ca42258a0b4c8c8031bac2c2e2fbf6c22c7834e3bc95678cd74b8 | 2026-03-26 04:14:37.870219 | orchestrator | | host_status | None | 2026-03-26 04:14:37.870232 | orchestrator | | id | f05d3527-1651-466e-b2c0-b2e9034c1f08 | 2026-03-26 04:14:37.870247 | orchestrator | | image | N/A (booted from volume) | 2026-03-26 04:14:37.870259 | orchestrator | | key_name | test | 2026-03-26 04:14:37.870268 | orchestrator | | locked | False | 2026-03-26 04:14:37.870277 | orchestrator | | locked_reason | None | 2026-03-26 04:14:37.870286 | orchestrator | | name | test-2 | 2026-03-26 04:14:37.870294 | orchestrator | | pinned_availability_zone | None | 2026-03-26 04:14:37.870303 | orchestrator | | progress | 0 | 2026-03-26 
04:14:37.870312 | orchestrator | | project_id | 77d2c6f1cc6541f3bbb616e01134d7dd | 2026-03-26 04:14:37.870321 | orchestrator | | properties | hostname='test-2' | 2026-03-26 04:14:37.870336 | orchestrator | | security_groups | name='ssh' | 2026-03-26 04:14:37.870349 | orchestrator | | | name='icmp' | 2026-03-26 04:14:37.870358 | orchestrator | | server_groups | None | 2026-03-26 04:14:37.870367 | orchestrator | | status | ACTIVE | 2026-03-26 04:14:37.870376 | orchestrator | | tags | test | 2026-03-26 04:14:37.870385 | orchestrator | | trusted_image_certificates | None | 2026-03-26 04:14:37.870394 | orchestrator | | updated | 2026-03-26T04:13:36Z | 2026-03-26 04:14:37.870403 | orchestrator | | user_id | f5786063540c4b40a3a24028a0f17712 | 2026-03-26 04:14:37.870411 | orchestrator | | volumes_attached | delete_on_termination='True', id='28433d04-ce59-4919-8a32-660a14b3fb94' | 2026-03-26 04:14:37.873745 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:38.138627 | orchestrator | + openstack --os-cloud test server show test-3 2026-03-26 04:14:41.139715 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:41.139792 | orchestrator | | Field | Value | 2026-03-26 04:14:41.139801 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:41.139810 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-26 04:14:41.139819 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-03-26 04:14:41.139829 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-26 04:14:41.139839 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-03-26 04:14:41.139847 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-26 04:14:41.139857 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-26 04:14:41.139912 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-26 04:14:41.139919 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-26 04:14:41.139927 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-26 04:14:41.139980 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-26 04:14:41.139988 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-26 04:14:41.139994 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-26 04:14:41.139999 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-26 04:14:41.140005 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-26 04:14:41.140010 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-26 04:14:41.140022 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-26T04:13:06.000000 | 2026-03-26 04:14:41.140037 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-26 04:14:41.140043 | orchestrator | | accessIPv4 | | 2026-03-26 04:14:41.140049 | orchestrator | | accessIPv6 | | 2026-03-26 04:14:41.140054 | orchestrator | | 
addresses | test=192.168.112.146, 192.168.200.91 | 2026-03-26 04:14:41.140060 | orchestrator | | config_drive | | 2026-03-26 04:14:41.140066 | orchestrator | | created | 2026-03-26T04:12:38Z | 2026-03-26 04:14:41.140071 | orchestrator | | description | None | 2026-03-26 04:14:41.140077 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-26 04:14:41.140083 | orchestrator | | hostId | 674ca42258a0b4c8c8031bac2c2e2fbf6c22c7834e3bc95678cd74b8 | 2026-03-26 04:14:41.140092 | orchestrator | | host_status | None | 2026-03-26 04:14:41.140106 | orchestrator | | id | a86d3bc5-e53a-421d-bf49-90bac35835eb | 2026-03-26 04:14:41.140112 | orchestrator | | image | N/A (booted from volume) | 2026-03-26 04:14:41.140117 | orchestrator | | key_name | test | 2026-03-26 04:14:41.140123 | orchestrator | | locked | False | 2026-03-26 04:14:41.140129 | orchestrator | | locked_reason | None | 2026-03-26 04:14:41.140135 | orchestrator | | name | test-3 | 2026-03-26 04:14:41.140140 | orchestrator | | pinned_availability_zone | None | 2026-03-26 04:14:41.140146 | orchestrator | | progress | 0 | 2026-03-26 04:14:41.140155 | orchestrator | | project_id | 77d2c6f1cc6541f3bbb616e01134d7dd | 2026-03-26 04:14:41.140161 | orchestrator | | properties | hostname='test-3' | 2026-03-26 04:14:41.140174 | orchestrator | | security_groups | name='ssh' | 2026-03-26 04:14:41.140180 | orchestrator | | | name='icmp' | 2026-03-26 04:14:41.140186 | orchestrator | | server_groups | None | 2026-03-26 04:14:41.140215 | orchestrator | | status | ACTIVE | 2026-03-26 04:14:41.140221 | orchestrator | | tags | test | 2026-03-26 04:14:41.140227 | orchestrator | | 
trusted_image_certificates | None | 2026-03-26 04:14:41.140233 | orchestrator | | updated | 2026-03-26T04:13:37Z | 2026-03-26 04:14:41.140243 | orchestrator | | user_id | f5786063540c4b40a3a24028a0f17712 | 2026-03-26 04:14:41.140248 | orchestrator | | volumes_attached | delete_on_termination='True', id='7ff6c181-375d-43be-8b24-6e9effb98e3b' | 2026-03-26 04:14:41.141418 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:41.406480 | orchestrator | + openstack --os-cloud test server show test-4 2026-03-26 04:14:44.493313 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:44.493425 | orchestrator | | Field | Value | 2026-03-26 04:14:44.493443 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-03-26 04:14:44.493455 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-03-26 04:14:44.493467 | orchestrator | | 
OS-EXT-AZ:availability_zone | nova | 2026-03-26 04:14:44.493479 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-03-26 04:14:44.493511 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-03-26 04:14:44.493524 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-03-26 04:14:44.493535 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-03-26 04:14:44.493571 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-03-26 04:14:44.493585 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-03-26 04:14:44.493596 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-03-26 04:14:44.493607 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-03-26 04:14:44.493619 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-03-26 04:14:44.493631 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-03-26 04:14:44.493649 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-03-26 04:14:44.493661 | orchestrator | | OS-EXT-STS:task_state | None | 2026-03-26 04:14:44.493672 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-03-26 04:14:44.493683 | orchestrator | | OS-SRV-USG:launched_at | 2026-03-26T04:13:06.000000 | 2026-03-26 04:14:44.493703 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-03-26 04:14:44.493714 | orchestrator | | accessIPv4 | | 2026-03-26 04:14:44.493726 | orchestrator | | accessIPv6 | | 2026-03-26 04:14:44.493737 | orchestrator | | addresses | test=192.168.112.180, 192.168.200.63 | 2026-03-26 04:14:44.493749 | orchestrator | | config_drive | | 2026-03-26 04:14:44.493760 | orchestrator | | created | 2026-03-26T04:12:40Z | 2026-03-26 04:14:44.493779 | orchestrator | | description | None | 2026-03-26 04:14:44.493858 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', 
id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-03-26 04:14:44.493880 | orchestrator | | hostId | ee0f370abe6f4901d5b81d816f3256dda7c87709752fbf58b38a05ee | 2026-03-26 04:14:44.493893 | orchestrator | | host_status | None | 2026-03-26 04:14:44.493920 | orchestrator | | id | 9f84049c-743f-4895-8425-3c6d6b59650a | 2026-03-26 04:14:44.493934 | orchestrator | | image | N/A (booted from volume) | 2026-03-26 04:14:44.493975 | orchestrator | | key_name | test | 2026-03-26 04:14:44.493989 | orchestrator | | locked | False | 2026-03-26 04:14:44.494002 | orchestrator | | locked_reason | None | 2026-03-26 04:14:44.494110 | orchestrator | | name | test-4 | 2026-03-26 04:14:44.494127 | orchestrator | | pinned_availability_zone | None | 2026-03-26 04:14:44.494164 | orchestrator | | progress | 0 | 2026-03-26 04:14:44.494179 | orchestrator | | project_id | 77d2c6f1cc6541f3bbb616e01134d7dd | 2026-03-26 04:14:44.494192 | orchestrator | | properties | hostname='test-4' | 2026-03-26 04:14:44.494219 | orchestrator | | security_groups | name='ssh' | 2026-03-26 04:14:44.494232 | orchestrator | | | name='icmp' | 2026-03-26 04:14:44.494244 | orchestrator | | server_groups | None | 2026-03-26 04:14:44.494255 | orchestrator | | status | ACTIVE | 2026-03-26 04:14:44.494275 | orchestrator | | tags | test | 2026-03-26 04:14:44.494287 | orchestrator | | trusted_image_certificates | None | 2026-03-26 04:14:44.494299 | orchestrator | | updated | 2026-03-26T04:13:38Z | 2026-03-26 04:14:44.494310 | orchestrator | | user_id | f5786063540c4b40a3a24028a0f17712 | 2026-03-26 04:14:44.494322 | orchestrator | | volumes_attached | delete_on_termination='True', id='2af11777-ad16-40e3-aa06-7eb5dd4b823c' | 2026-03-26 04:14:44.506792 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-03-26 04:14:44.764281 | orchestrator | + server_ping
2026-03-26 04:14:44.765265 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-03-26 04:14:44.765320 | orchestrator | ++ tr -d '\r'
2026-03-26 04:14:47.670310 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-26 04:14:47.670394 | orchestrator | + ping -c3 192.168.112.146
2026-03-26 04:14:47.685805 | orchestrator | PING 192.168.112.146 (192.168.112.146) 56(84) bytes of data.
2026-03-26 04:14:47.685862 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=1 ttl=63 time=7.13 ms
2026-03-26 04:14:48.683080 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=2 ttl=63 time=2.30 ms
2026-03-26 04:14:49.684643 | orchestrator | 64 bytes from 192.168.112.146: icmp_seq=3 ttl=63 time=1.91 ms
2026-03-26 04:14:49.684749 | orchestrator |
2026-03-26 04:14:49.684765 | orchestrator | --- 192.168.112.146 ping statistics ---
2026-03-26 04:14:49.684776 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-26 04:14:49.684902 | orchestrator | rtt min/avg/max/mdev = 1.905/3.779/7.133/2.376 ms
2026-03-26 04:14:49.684928 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-26 04:14:49.684939 | orchestrator | + ping -c3 192.168.112.117
2026-03-26 04:14:49.698235 | orchestrator | PING 192.168.112.117 (192.168.112.117) 56(84) bytes of data.
2026-03-26 04:14:49.698312 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=1 ttl=63 time=8.63 ms
2026-03-26 04:14:50.694280 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=2 ttl=63 time=2.41 ms
2026-03-26 04:14:51.694708 | orchestrator | 64 bytes from 192.168.112.117: icmp_seq=3 ttl=63 time=1.84 ms
2026-03-26 04:14:51.694810 | orchestrator |
2026-03-26 04:14:51.694827 | orchestrator | --- 192.168.112.117 ping statistics ---
2026-03-26 04:14:51.694841 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-26 04:14:51.694852 | orchestrator | rtt min/avg/max/mdev = 1.836/4.292/8.629/3.075 ms
2026-03-26 04:14:51.695895 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-26 04:14:51.695919 | orchestrator | + ping -c3 192.168.112.181
2026-03-26 04:14:51.705263 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data.
2026-03-26 04:14:51.705300 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=5.35 ms
2026-03-26 04:14:52.704360 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.32 ms
2026-03-26 04:14:53.706156 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.04 ms
2026-03-26 04:14:53.706258 | orchestrator |
2026-03-26 04:14:53.706275 | orchestrator | --- 192.168.112.181 ping statistics ---
2026-03-26 04:14:53.706288 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-26 04:14:53.706300 | orchestrator | rtt min/avg/max/mdev = 2.039/3.237/5.352/1.499 ms
2026-03-26 04:14:53.706311 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-26 04:14:53.706333 | orchestrator | + ping -c3 192.168.112.156
2026-03-26 04:14:53.719758 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data.
2026-03-26 04:14:53.719818 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=8.77 ms
2026-03-26 04:14:54.715531 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.27 ms
2026-03-26 04:14:55.716777 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.64 ms
2026-03-26 04:14:55.716873 | orchestrator |
2026-03-26 04:14:55.716889 | orchestrator | --- 192.168.112.156 ping statistics ---
2026-03-26 04:14:55.716902 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-26 04:14:55.716914 | orchestrator | rtt min/avg/max/mdev = 1.641/4.227/8.773/3.224 ms
2026-03-26 04:14:55.716926 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-03-26 04:14:55.716938 | orchestrator | + ping -c3 192.168.112.180
2026-03-26 04:14:55.728467 | orchestrator | PING 192.168.112.180 (192.168.112.180) 56(84) bytes of data.
2026-03-26 04:14:55.728521 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=1 ttl=63 time=7.43 ms
2026-03-26 04:14:56.725883 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=2 ttl=63 time=2.92 ms
2026-03-26 04:14:57.726672 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=3 ttl=63 time=1.78 ms
2026-03-26 04:14:57.726773 | orchestrator |
2026-03-26 04:14:57.726791 | orchestrator | --- 192.168.112.180 ping statistics ---
2026-03-26 04:14:57.726804 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-03-26 04:14:57.726814 | orchestrator | rtt min/avg/max/mdev = 1.782/4.043/7.431/2.439 ms
2026-03-26 04:14:57.726825 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-03-26 04:14:57.947168 | orchestrator | ok: Runtime: 0:10:52.001766
2026-03-26 04:14:58.017340 |
2026-03-26 04:14:58.017525 | TASK [Run tempest]
2026-03-26 04:14:58.552111 | orchestrator | skipping: Conditional result was False
2026-03-26 04:14:58.570523 |
2026-03-26 04:14:58.570699 | TASK [Check prometheus alert status]
2026-03-26 04:14:59.106812 | orchestrator | skipping: Conditional result was False
2026-03-26 04:14:59.120987 |
2026-03-26 04:14:59.121138 | PLAY [Upgrade testbed]
2026-03-26 04:14:59.132518 |
2026-03-26 04:14:59.132628 | TASK [Print next ceph version]
2026-03-26 04:14:59.213759 | orchestrator | ok
2026-03-26 04:14:59.224653 |
2026-03-26 04:14:59.224860 | TASK [Print next openstack version]
2026-03-26 04:14:59.305814 | orchestrator | ok
2026-03-26 04:14:59.317917 |
2026-03-26 04:14:59.318042 | TASK [Print next manager version]
2026-03-26 04:14:59.386575 | orchestrator | ok
2026-03-26 04:14:59.402891 |
2026-03-26 04:14:59.403101 | TASK [Set cloud fact (Zuul deployment)]
2026-03-26 04:14:59.456028 | orchestrator | ok
2026-03-26 04:14:59.467563 |
2026-03-26 04:14:59.467682 | TASK [Set cloud fact (local deployment)]
2026-03-26 04:14:59.504245 | orchestrator | skipping: Conditional result was False
2026-03-26 04:14:59.521278 |
2026-03-26
04:14:59.521423 | TASK [Fetch manager address] 2026-03-26 04:14:59.806104 | orchestrator | ok 2026-03-26 04:14:59.816661 | 2026-03-26 04:14:59.816809 | TASK [Set manager_host address] 2026-03-26 04:14:59.897059 | orchestrator | ok 2026-03-26 04:14:59.908132 | 2026-03-26 04:14:59.908250 | TASK [Run upgrade] 2026-03-26 04:15:00.589721 | orchestrator | + set -e 2026-03-26 04:15:00.590071 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-26 04:15:00.590117 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-26 04:15:00.590148 | orchestrator | + CEPH_VERSION=reef 2026-03-26 04:15:00.590170 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-26 04:15:00.590191 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-26 04:15:00.590219 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-03-26 04:15:00.597777 | orchestrator | + set -e 2026-03-26 04:15:00.597884 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 04:15:00.597910 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 04:15:00.597938 | orchestrator | ++ INTERACTIVE=false 2026-03-26 04:15:00.597957 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 04:15:00.598123 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 04:15:00.599187 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-03-26 04:15:00.635495 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-03-26 04:15:00.636112 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-03-26 04:15:00.675513 | orchestrator | 2026-03-26 04:15:00.675627 | orchestrator | # UPGRADE MANAGER 2026-03-26 04:15:00.675658 | orchestrator | 2026-03-26 04:15:00.675678 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-03-26 04:15:00.675697 | orchestrator | + echo 2026-03-26 04:15:00.675717 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-03-26 04:15:00.675738 | orchestrator | + echo 2026-03-26 04:15:00.675758 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-03-26 04:15:00.675780 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-03-26 04:15:00.675796 | orchestrator | + CEPH_VERSION=reef 2026-03-26 04:15:00.675807 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-03-26 04:15:00.675818 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-03-26 04:15:00.675829 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-03-26 04:15:00.683347 | orchestrator | + set -e 2026-03-26 04:15:00.683437 | orchestrator | + VERSION=10.0.0-rc.1 2026-03-26 04:15:00.683452 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-03-26 04:15:00.689918 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-03-26 04:15:00.690114 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-26 04:15:00.695336 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-03-26 04:15:00.699310 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-26 04:15:00.705814 | orchestrator | /opt/configuration ~ 2026-03-26 04:15:00.705869 | orchestrator | + set -e 2026-03-26 04:15:00.705879 | orchestrator | + pushd /opt/configuration 2026-03-26 04:15:00.705888 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 04:15:00.705898 | orchestrator | + source /opt/venv/bin/activate 2026-03-26 04:15:00.706822 | orchestrator | ++ deactivate nondestructive 2026-03-26 04:15:00.706950 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:00.707000 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:00.707750 | orchestrator | ++ hash -r 2026-03-26 04:15:00.707796 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:00.707803 | orchestrator | ++ unset VIRTUAL_ENV 
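As traced above, `set-manager-version.sh` pins `manager_version` and, for non-`latest` targets, deletes the `ceph_version`/`openstack_version` pins so the defaults of the new release take effect. A self-contained sketch of that step (the function wrapper is an addition; the `sed` expressions come from the trace):

```shell
# Pin manager_version in a configuration file and drop the ceph/openstack
# version pins, mirroring the sed calls of set-manager-version.sh.
set_manager_version() {
    local version="$1" cfg="$2"
    sed -i "s/manager_version: .*/manager_version: ${version}/g" "$cfg"
    if [[ "${version}" != latest ]]; then
        sed -i '/ceph_version:/d' "$cfg"
        sed -i '/openstack_version:/d' "$cfg"
    fi
}
```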
2026-03-26 04:15:00.707808 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-26 04:15:00.707812 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-26 04:15:00.707818 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-26 04:15:00.707822 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-26 04:15:00.707826 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-26 04:15:00.707830 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-26 04:15:00.707835 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 04:15:00.707840 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 04:15:00.707844 | orchestrator | ++ export PATH 2026-03-26 04:15:00.707848 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:00.707852 | orchestrator | ++ '[' -z '' ']' 2026-03-26 04:15:00.707855 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-26 04:15:00.707859 | orchestrator | ++ PS1='(venv) ' 2026-03-26 04:15:00.707863 | orchestrator | ++ export PS1 2026-03-26 04:15:00.707867 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-26 04:15:00.707871 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-26 04:15:00.707875 | orchestrator | ++ hash -r 2026-03-26 04:15:00.707882 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-26 04:15:01.829035 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-26 04:15:01.830065 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-26 04:15:01.831589 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-26 04:15:01.832910 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-26 04:15:01.834240 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-03-26 04:15:01.844147 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-26 04:15:01.845562 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-26 04:15:01.846778 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-26 04:15:01.848305 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-26 04:15:01.894709 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-26 04:15:01.897195 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-26 04:15:01.898763 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-26 04:15:01.900221 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-26 04:15:01.904228 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-26 04:15:02.166138 | orchestrator | ++ which gilt 2026-03-26 04:15:02.167780 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-26 04:15:02.167810 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-26 04:15:02.451591 | orchestrator | osism.cfg-generics: 2026-03-26 04:15:02.552203 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-26 04:15:02.552945 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-26 04:15:02.553910 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-26 04:15:02.554055 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-26 04:15:03.603368 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-26 04:15:03.617214 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-26 04:15:03.995695 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-26 04:15:04.061415 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 04:15:04.061484 | orchestrator | + deactivate 2026-03-26 04:15:04.061491 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-26 04:15:04.061497 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 04:15:04.061502 | orchestrator | + export PATH 2026-03-26 04:15:04.061506 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-26 04:15:04.061510 | orchestrator | + '[' -n '' ']' 2026-03-26 04:15:04.061514 | orchestrator | + hash -r 2026-03-26 04:15:04.061518 | orchestrator | + '[' -n '' ']' 2026-03-26 04:15:04.061522 | orchestrator | + unset VIRTUAL_ENV 2026-03-26 04:15:04.061526 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-26 04:15:04.061530 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-26 04:15:04.061534 | orchestrator | + unset -f deactivate 2026-03-26 04:15:04.061538 | orchestrator | + popd 2026-03-26 04:15:04.061541 | orchestrator | ~ 2026-03-26 04:15:04.063422 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-03-26 04:15:04.063489 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-03-26 04:15:04.068062 | orchestrator | + set -e 2026-03-26 04:15:04.068296 | orchestrator | + NAMESPACE=kolla/release 2026-03-26 04:15:04.068308 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-26 04:15:04.077410 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-03-26 04:15:04.083749 | orchestrator | /opt/configuration ~ 2026-03-26 04:15:04.083793 | orchestrator | + set -e 2026-03-26 04:15:04.083803 | orchestrator | + pushd /opt/configuration 2026-03-26 04:15:04.083811 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 04:15:04.083819 | orchestrator | + source /opt/venv/bin/activate 2026-03-26 04:15:04.083832 | orchestrator | ++ deactivate nondestructive 2026-03-26 04:15:04.083908 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:04.083917 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:04.083923 | orchestrator | ++ hash -r 2026-03-26 04:15:04.084055 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:04.084064 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-26 04:15:04.084071 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-26 04:15:04.084078 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-03-26 04:15:04.084307 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-26 04:15:04.084405 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-26 04:15:04.084423 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-26 04:15:04.084441 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-26 04:15:04.084453 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 04:15:04.084478 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 04:15:04.084490 | orchestrator | ++ export PATH 2026-03-26 04:15:04.084501 | orchestrator | ++ '[' -n '' ']' 2026-03-26 04:15:04.084512 | orchestrator | ++ '[' -z '' ']' 2026-03-26 04:15:04.084523 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-26 04:15:04.084533 | orchestrator | ++ PS1='(venv) ' 2026-03-26 04:15:04.084544 | orchestrator | ++ export PS1 2026-03-26 04:15:04.084554 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-26 04:15:04.084565 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-26 04:15:04.084576 | orchestrator | ++ hash -r 2026-03-26 04:15:04.084906 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-03-26 04:15:04.591367 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-03-26 04:15:04.591698 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.0) 2026-03-26 04:15:04.593114 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-03-26 04:15:04.594576 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-03-26 04:15:04.595881 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-03-26 04:15:04.605882 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-03-26 04:15:04.607653 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-03-26 04:15:04.608710 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-03-26 04:15:04.610271 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-03-26 04:15:04.646106 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.6) 2026-03-26 04:15:04.647891 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-03-26 04:15:04.650317 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-03-26 04:15:04.651260 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-03-26 04:15:04.655512 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-03-26 04:15:04.868104 | orchestrator | ++ which gilt 2026-03-26 04:15:04.870916 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-03-26 04:15:04.871002 | orchestrator | + /opt/venv/bin/gilt overlay 2026-03-26 04:15:05.040282 | orchestrator | osism.cfg-generics: 2026-03-26 04:15:05.116049 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-03-26 04:15:05.116176 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-03-26 04:15:05.116204 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-03-26 04:15:05.116227 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-03-26 04:15:05.789476 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-03-26 04:15:05.798811 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-03-26 04:15:06.132325 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-03-26 04:15:06.193102 | orchestrator | ~ 2026-03-26 04:15:06.193201 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-26 04:15:06.193217 | orchestrator | + deactivate 2026-03-26 04:15:06.193253 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-26 04:15:06.193264 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-26 04:15:06.193274 | orchestrator | + export PATH 2026-03-26 04:15:06.193283 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-26 04:15:06.193292 | orchestrator | + '[' -n '' ']' 2026-03-26 04:15:06.193301 | orchestrator | + hash -r 2026-03-26 04:15:06.193310 | orchestrator | + '[' -n '' ']' 2026-03-26 04:15:06.193319 | orchestrator | + unset VIRTUAL_ENV 2026-03-26 04:15:06.193328 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-03-26 04:15:06.193337 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-03-26 04:15:06.193346 | orchestrator | + unset -f deactivate 2026-03-26 04:15:06.193355 | orchestrator | + popd 2026-03-26 04:15:06.195560 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-03-26 04:15:06.261084 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-26 04:15:06.261571 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-03-26 04:15:06.348748 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-26 04:15:06.348866 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-26 04:15:06.355053 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-03-26 04:15:06.362003 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-03-26 04:15:06.425445 | orchestrator | ++ '[' -1 -le 0 ']' 2026-03-26 04:15:06.425615 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-03-26 04:15:06.497030 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-03-26 04:15:06.497135 | orchestrator | ++ echo true 2026-03-26 04:15:06.497152 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-03-26 04:15:06.497600 | orchestrator | +++ semver 2024.2 2024.2 2026-03-26 04:15:06.546194 | orchestrator | ++ '[' 0 -le 0 ']' 2026-03-26 04:15:06.546296 | orchestrator | +++ semver 2024.2 2025.1 2026-03-26 04:15:06.583233 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-03-26 04:15:06.583310 | orchestrator | ++ echo false 2026-03-26 04:15:06.583335 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-03-26 04:15:06.583479 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-26 04:15:06.583500 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-03-26 04:15:06.583615 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-03-26 04:15:06.583687 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-03-26 04:15:06.587761 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-03-26 04:15:06.587819 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-03-26 04:15:06.605717 | orchestrator | export RABBITMQ3TO4=true 2026-03-26 04:15:06.607870 | orchestrator | + osism update manager 2026-03-26 04:15:11.920637 | orchestrator | Collecting uv 2026-03-26 04:15:12.025590 | orchestrator | Downloading uv-0.11.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-03-26 04:15:12.042650 | orchestrator | Downloading uv-0.11.1-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.5 MB) 2026-03-26 04:15:12.964439 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.5/24.5 MB 33.8 MB/s eta 0:00:00 2026-03-26 04:15:13.032713 | orchestrator | Installing collected packages: uv 2026-03-26 04:15:13.516874 | orchestrator | Successfully installed uv-0.11.1 2026-03-26 04:15:14.207575 | orchestrator | Resolved 11 packages in 380ms 2026-03-26 04:15:14.241036 | orchestrator | Downloading netaddr (2.2MiB) 2026-03-26 04:15:14.241749 | orchestrator | Downloading cryptography (4.3MiB) 2026-03-26 04:15:14.241777 | orchestrator | Downloading ansible-core (2.1MiB) 2026-03-26 04:15:14.241790 | orchestrator | Downloading ansible (54.5MiB) 2026-03-26 04:15:14.578506 | orchestrator | Downloaded netaddr 2026-03-26 04:15:14.710420 | orchestrator | Downloaded cryptography 2026-03-26 04:15:14.734005 | orchestrator | Downloaded ansible-core 2026-03-26 04:15:21.612471 | orchestrator | Downloaded ansible 2026-03-26 04:15:21.612584 | orchestrator | Prepared 11 packages in 7.40s 2026-03-26 04:15:22.217305 | orchestrator | Installed 11 packages in 604ms 2026-03-26 04:15:22.217384 | orchestrator | + ansible==11.11.0 2026-03-26 04:15:22.217394 | orchestrator | + ansible-core==2.18.15 2026-03-26 04:15:22.217403 | orchestrator | + cffi==2.0.0 2026-03-26 04:15:22.217411 | orchestrator | + cryptography==46.0.6 2026-03-26 04:15:22.217419 | orchestrator | + jinja2==3.1.6 2026-03-26 04:15:22.217426 | orchestrator | 
+ markupsafe==3.0.3 2026-03-26 04:15:22.217434 | orchestrator | + netaddr==1.3.0 2026-03-26 04:15:22.217441 | orchestrator | + packaging==26.0 2026-03-26 04:15:22.217448 | orchestrator | + pycparser==3.0 2026-03-26 04:15:22.217455 | orchestrator | + pyyaml==6.0.3 2026-03-26 04:15:22.217463 | orchestrator | + resolvelib==1.0.1 2026-03-26 04:15:23.301751 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-203816n2qsv95p/tmpz7vqf5eh/ansible-collection-servicesarne1aad'... 2026-03-26 04:15:24.853480 | orchestrator | Your branch is up to date with 'origin/main'. 2026-03-26 04:15:24.853578 | orchestrator | Already on 'main' 2026-03-26 04:15:25.328462 | orchestrator | Starting galaxy collection install process 2026-03-26 04:15:25.328564 | orchestrator | Process install dependency map 2026-03-26 04:15:25.328579 | orchestrator | Starting collection install process 2026-03-26 04:15:25.328591 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-03-26 04:15:25.328603 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-03-26 04:15:25.328614 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-26 04:15:25.814405 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-203833xiam2p3s/tmp84azf_u6/ansible-playbooks-managerqcge2toh'... 2026-03-26 04:15:26.375557 | orchestrator | Already on 'main' 2026-03-26 04:15:26.375653 | orchestrator | Your branch is up to date with 'origin/main'. 
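The `MANAGER_UPGRADE_CROSSES_10` / `OPENSTACK_UPGRADE_CROSSES_2025` gating earlier in the trace relies on a `semver` helper that prints -1/0/1 for older/equal/newer. A rough stand-in built on GNU `sort -V` (the helper names and the leading-`v` stripping are assumptions; `sort -V` is not full SemVer precedence, but it agrees with the comparisons seen in this trace):

```shell
# Print -1/0/1 depending on whether version $1 is older than, equal to,
# or newer than $2. Leading "v" prefixes are stripped; ordering comes
# from GNU "sort -V".
semver_cmp() {
    local a="${1#v}" b="${2#v}"
    if [ "$a" = "$b" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
        echo -1
    else
        echo 1
    fi
}

# true if an upgrade from $1 to $2 starts at or below $3 and ends at or
# above $4 -- the CROSSES_* pattern used in the trace.
crosses_boundary() {
    if [ "$(semver_cmp "$1" "$3")" -le 0 ] && [ "$(semver_cmp "$2" "$4")" -ge 0 ]; then
        echo true
    else
        echo false
    fi
}
```

With the versions from this run, `crosses_boundary v0.20251130.0 10.0.0-rc.1 9.5.0 10.0.0-0` yields `true`, while the OpenStack check `crosses_boundary 2024.2 2024.2 2024.2 2025.1` yields `false`, matching the log.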
2026-03-26 04:15:26.637191 | orchestrator | Starting galaxy collection install process 2026-03-26 04:15:26.637305 | orchestrator | Process install dependency map 2026-03-26 04:15:26.637346 | orchestrator | Starting collection install process 2026-03-26 04:15:26.637374 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-03-26 04:15:26.637388 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-03-26 04:15:26.637401 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-03-26 04:15:27.284446 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-03-26 04:15:27.284533 | orchestrator | -vvvv to see details 2026-03-26 04:15:27.686741 | orchestrator | 2026-03-26 04:15:27.686847 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-03-26 04:15:27.686870 | orchestrator | 2026-03-26 04:15:27.686890 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-26 04:15:31.493213 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:31.493320 | orchestrator | 2026-03-26 04:15:31.493338 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-26 04:15:31.567658 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-26 04:15:31.567748 | orchestrator | 2026-03-26 04:15:31.567786 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-26 04:15:33.316706 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:33.316809 | orchestrator | 2026-03-26 04:15:33.316825 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 
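Just before `osism update manager`, the script persists the RabbitMQ 3-to-4 migration flag with `sudo tee -a /opt/manager-vars.sh` so later scripts can source it. The append in the trace is unconditional; a sketch of an idempotent variant (the `grep` guard and function name are additions, not part of the job):

```shell
# Append "export NAME=true" to a vars file, but only once, so repeated
# upgrade runs do not stack duplicate lines. The trace itself appends
# unconditionally via "sudo tee -a".
persist_flag() {
    local line="export $1=true" file="$2"
    grep -qxF "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}
```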
2026-03-26 04:15:33.369147 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:33.369241 | orchestrator | 2026-03-26 04:15:33.369255 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-26 04:15:33.429861 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-26 04:15:33.429947 | orchestrator | 2026-03-26 04:15:33.429961 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-26 04:15:37.673925 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-03-26 04:15:37.674238 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-03-26 04:15:37.674286 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-26 04:15:37.674312 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-03-26 04:15:37.674323 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-26 04:15:37.674334 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-26 04:15:37.674345 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-26 04:15:37.674355 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-03-26 04:15:37.674367 | orchestrator | 2026-03-26 04:15:37.674379 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-03-26 04:15:38.727695 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:38.727814 | orchestrator | 2026-03-26 04:15:38.727838 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-26 04:15:39.682130 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:39.682234 | orchestrator | 2026-03-26 04:15:39.682251 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-26 04:15:39.767270 | 
orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-26 04:15:39.767373 | orchestrator | 2026-03-26 04:15:39.767388 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-26 04:15:41.602873 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-03-26 04:15:41.602951 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-03-26 04:15:41.602959 | orchestrator | 2026-03-26 04:15:41.602964 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-26 04:15:42.536577 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:42.536688 | orchestrator | 2026-03-26 04:15:42.536714 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-26 04:15:42.592640 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:15:42.592741 | orchestrator | 2026-03-26 04:15:42.592759 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-26 04:15:42.687561 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-26 04:15:42.687646 | orchestrator | 2026-03-26 04:15:42.687658 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-03-26 04:15:43.624814 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:43.624916 | orchestrator | 2026-03-26 04:15:43.624933 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-26 04:15:43.690925 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-26 04:15:43.691018 | orchestrator | 2026-03-26 04:15:43.691087 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2026-03-26 04:15:45.656332 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-26 04:15:45.656442 | orchestrator | ok: [testbed-manager] => (item=None) 2026-03-26 04:15:45.656457 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:45.656469 | orchestrator | 2026-03-26 04:15:45.656481 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-26 04:15:46.607203 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:46.607325 | orchestrator | 2026-03-26 04:15:46.607343 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-26 04:15:46.665277 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:15:46.665374 | orchestrator | 2026-03-26 04:15:46.665387 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-26 04:15:46.774199 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-26 04:15:46.774295 | orchestrator | 2026-03-26 04:15:46.774311 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-26 04:15:47.409263 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:47.409365 | orchestrator | 2026-03-26 04:15:47.409380 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-26 04:15:47.974401 | orchestrator | ok: [testbed-manager] 2026-03-26 04:15:47.974503 | orchestrator | 2026-03-26 04:15:47.974520 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-26 04:15:49.804834 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-03-26 04:15:49.804962 | orchestrator | ok: [testbed-manager] => (item=openstack) 2026-03-26 04:15:49.804988 | orchestrator | 2026-03-26 
04:15:49.805009 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-26 04:15:50.932545 | orchestrator | changed: [testbed-manager]
2026-03-26 04:15:50.932663 | orchestrator |
2026-03-26 04:15:50.932679 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-26 04:15:51.480848 | orchestrator | ok: [testbed-manager]
2026-03-26 04:15:51.480951 | orchestrator |
2026-03-26 04:15:51.480968 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-26 04:15:52.025265 | orchestrator | ok: [testbed-manager]
2026-03-26 04:15:52.025366 | orchestrator |
2026-03-26 04:15:52.025405 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-26 04:15:52.074486 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:15:52.074587 | orchestrator |
2026-03-26 04:15:52.074603 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-26 04:15:52.140898 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-26 04:15:52.140989 | orchestrator |
2026-03-26 04:15:52.141004 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-26 04:15:52.177413 | orchestrator | ok: [testbed-manager]
2026-03-26 04:15:52.177455 | orchestrator |
2026-03-26 04:15:52.177468 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-26 04:15:55.011711 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-03-26 04:15:55.011847 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-03-26 04:15:55.011863 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-03-26 04:15:55.011876 | orchestrator |
2026-03-26 04:15:55.011889 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-26 04:15:55.975889 | orchestrator | ok: [testbed-manager]
2026-03-26 04:15:55.975998 | orchestrator |
2026-03-26 04:15:55.976015 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-26 04:15:56.972666 | orchestrator | ok: [testbed-manager]
2026-03-26 04:15:56.972777 | orchestrator |
2026-03-26 04:15:56.972794 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-26 04:15:57.927334 | orchestrator | ok: [testbed-manager]
2026-03-26 04:15:57.927424 | orchestrator |
2026-03-26 04:15:57.927434 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-26 04:15:58.010149 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-26 04:15:58.010235 | orchestrator |
2026-03-26 04:15:58.010250 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-26 04:15:58.061557 | orchestrator | ok: [testbed-manager]
2026-03-26 04:15:58.061658 | orchestrator |
2026-03-26 04:15:58.061674 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-26 04:15:59.046900 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-03-26 04:15:59.046983 | orchestrator |
2026-03-26 04:15:59.046995 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-26 04:15:59.122783 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-26 04:15:59.122879 | orchestrator |
2026-03-26 04:15:59.122894 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-26 04:16:00.102483 | orchestrator | ok: [testbed-manager]
2026-03-26 04:16:00.102579 | orchestrator |
2026-03-26 04:16:00.102593 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-26 04:16:01.170352 | orchestrator | ok: [testbed-manager]
2026-03-26 04:16:01.170430 | orchestrator |
2026-03-26 04:16:01.170440 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-26 04:16:01.260964 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:16:01.261085 | orchestrator |
2026-03-26 04:16:01.261101 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-26 04:16:01.325931 | orchestrator | ok: [testbed-manager]
2026-03-26 04:16:01.326121 | orchestrator |
2026-03-26 04:16:01.326143 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-26 04:16:02.629293 | orchestrator | changed: [testbed-manager]
2026-03-26 04:16:02.629383 | orchestrator |
2026-03-26 04:16:02.629393 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-26 04:17:09.626307 | orchestrator | changed: [testbed-manager]
2026-03-26 04:17:09.626429 | orchestrator |
2026-03-26 04:17:09.626446 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-26 04:17:10.740456 | orchestrator | ok: [testbed-manager]
2026-03-26 04:17:10.740583 | orchestrator |
2026-03-26 04:17:10.740615 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-26 04:17:10.798489 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:17:10.798594 | orchestrator |
2026-03-26 04:17:10.798609 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-26 04:17:11.594782 | orchestrator | ok: [testbed-manager]
2026-03-26 04:17:11.594879 | orchestrator |
2026-03-26 04:17:11.594893 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-26 04:17:11.659798 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:17:11.659897 | orchestrator |
2026-03-26 04:17:11.659914 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-26 04:17:11.659927 | orchestrator |
2026-03-26 04:17:11.659939 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-26 04:17:26.446446 | orchestrator | changed: [testbed-manager]
2026-03-26 04:17:26.446562 | orchestrator |
2026-03-26 04:17:26.446580 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-26 04:18:26.503270 | orchestrator | Pausing for 60 seconds
2026-03-26 04:18:26.503391 | orchestrator | changed: [testbed-manager]
2026-03-26 04:18:26.503408 | orchestrator |
2026-03-26 04:18:26.503421 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-03-26 04:18:26.561993 | orchestrator | ok: [testbed-manager]
2026-03-26 04:18:26.562140 | orchestrator |
2026-03-26 04:18:26.562154 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-26 04:18:30.053074 | orchestrator | changed: [testbed-manager]
2026-03-26 04:18:30.053184 | orchestrator |
2026-03-26 04:18:30.053259 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-26 04:19:32.777949 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-26 04:19:32.778117 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-26 04:19:32.778135 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-03-26 04:19:32.778149 | orchestrator | changed: [testbed-manager]
2026-03-26 04:19:32.778162 | orchestrator |
2026-03-26 04:19:32.778174 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-26 04:19:43.907026 | orchestrator | changed: [testbed-manager]
2026-03-26 04:19:43.907139 | orchestrator |
2026-03-26 04:19:43.907156 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-26 04:19:44.000071 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-26 04:19:44.000200 | orchestrator |
2026-03-26 04:19:44.000216 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-26 04:19:44.000229 | orchestrator |
2026-03-26 04:19:44.000240 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-26 04:19:44.072012 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:19:44.072106 | orchestrator |
2026-03-26 04:19:44.072121 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-26 04:19:44.158377 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-26 04:19:44.158471 | orchestrator |
2026-03-26 04:19:44.158507 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-26 04:19:45.189358 | orchestrator | changed: [testbed-manager]
2026-03-26 04:19:45.189460 | orchestrator |
2026-03-26 04:19:45.189476 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-26 04:19:48.762344 | orchestrator | ok: [testbed-manager]
2026-03-26 04:19:48.762437 | orchestrator |
2026-03-26 04:19:48.762450 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-26 04:19:48.847829 | orchestrator | ok: [testbed-manager] => {
2026-03-26 04:19:48.847948 | orchestrator |     "version_check_result.stdout_lines": [
2026-03-26 04:19:48.847971 | orchestrator |         "=== OSISM Container Version Check ===",
2026-03-26 04:19:48.847983 | orchestrator |         "Checking running containers against expected versions...",
2026-03-26 04:19:48.847995 | orchestrator |         "",
2026-03-26 04:19:48.848007 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-26 04:19:48.848018 | orchestrator |         " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-03-26 04:19:48.848029 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848040 | orchestrator |         " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-03-26 04:19:48.848050 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848061 | orchestrator |         "",
2026-03-26 04:19:48.848072 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-26 04:19:48.848083 | orchestrator |         " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-03-26 04:19:48.848094 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848104 | orchestrator |         " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-03-26 04:19:48.848115 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848125 | orchestrator |         "",
2026-03-26 04:19:48.848136 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-26 04:19:48.848146 | orchestrator |         " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-03-26 04:19:48.848157 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848167 | orchestrator |         " Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-03-26 04:19:48.848178 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848188 | orchestrator |         "",
2026-03-26 04:19:48.848199 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-26 04:19:48.848210 | orchestrator |         " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-03-26 04:19:48.848220 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848231 | orchestrator |         " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-03-26 04:19:48.848241 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848320 | orchestrator |         "",
2026-03-26 04:19:48.848333 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-26 04:19:48.848344 | orchestrator |         " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-03-26 04:19:48.848354 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848367 | orchestrator |         " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-03-26 04:19:48.848380 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848392 | orchestrator |         "",
2026-03-26 04:19:48.848405 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-03-26 04:19:48.848444 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848457 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848469 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848482 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848494 | orchestrator |         "",
2026-03-26 04:19:48.848507 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-03-26 04:19:48.848519 | orchestrator |         " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-26 04:19:48.848531 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848543 | orchestrator |         " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-26 04:19:48.848556 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848569 | orchestrator |         "",
2026-03-26 04:19:48.848581 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-03-26 04:19:48.848593 | orchestrator |         " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-26 04:19:48.848606 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848629 | orchestrator |         " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-26 04:19:48.848641 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848654 | orchestrator |         "",
2026-03-26 04:19:48.848666 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-03-26 04:19:48.848679 | orchestrator |         " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-03-26 04:19:48.848691 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848704 | orchestrator |         " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-03-26 04:19:48.848716 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848727 | orchestrator |         "",
2026-03-26 04:19:48.848742 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-03-26 04:19:48.848753 | orchestrator |         " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-26 04:19:48.848765 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848775 | orchestrator |         " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-26 04:19:48.848786 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848797 | orchestrator |         "",
2026-03-26 04:19:48.848808 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-03-26 04:19:48.848818 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848829 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848840 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848850 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848861 | orchestrator |         "",
2026-03-26 04:19:48.848872 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-03-26 04:19:48.848882 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848893 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848904 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848914 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848925 | orchestrator |         "",
2026-03-26 04:19:48.848936 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-03-26 04:19:48.848946 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848957 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.848968 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.848978 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.848989 | orchestrator |         "",
2026-03-26 04:19:48.849000 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-03-26 04:19:48.849011 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.849021 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.849032 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.849063 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.849075 | orchestrator |         "",
2026-03-26 04:19:48.849086 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-03-26 04:19:48.849096 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.849115 | orchestrator |         " Enabled: true",
2026-03-26 04:19:48.849126 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-03-26 04:19:48.849136 | orchestrator |         " Status: ✅ MATCH",
2026-03-26 04:19:48.849147 | orchestrator |         "",
2026-03-26 04:19:48.849158 | orchestrator |         "=== Summary ===",
2026-03-26 04:19:48.849169 | orchestrator |         "Errors (version mismatches): 0",
2026-03-26 04:19:48.849181 | orchestrator |         "Warnings (expected containers not running): 0",
2026-03-26 04:19:48.849200 | orchestrator |         "",
2026-03-26 04:19:48.849220 | orchestrator |         "✅ All running containers match expected versions!"
2026-03-26 04:19:48.849238 | orchestrator |     ]
2026-03-26 04:19:48.849281 | orchestrator | }
2026-03-26 04:19:48.849293 | orchestrator |
2026-03-26 04:19:48.849304 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-26 04:19:48.913932 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:19:48.914850 | orchestrator |
2026-03-26 04:19:48.914886 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:19:48.914896 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-03-26 04:19:48.914903 | orchestrator |
2026-03-26 04:20:01.444090 | orchestrator | 2026-03-26 04:20:01 | INFO  | Task c34b26f2-f289-4f95-835a-bf861cbcec99 (sync inventory) is running in background. Output coming soon.
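The version check above boils down to comparing the expected image reference of each enabled service against the image the runtime actually reports, then counting mismatches and absent containers. A minimal sketch of that comparison (hypothetical helper, not the actual verify-versions script deployed by the role):

```python
def check_versions(expected, running):
    """Compare expected image references against running containers.

    expected: dict service -> image reference (e.g. from docker-compose.yml)
    running:  dict service -> image reference reported by the runtime
    Returns (errors, warnings), mirroring the two summary counters in the
    log: version mismatches and expected-but-not-running services.
    """
    errors, warnings = [], []
    for service, image in expected.items():
        actual = running.get(service)
        if actual is None:
            warnings.append(service)   # expected container not running
        elif actual != image:
            errors.append(service)     # version mismatch
    return errors, warnings


expected = {
    "mariadb": "registry.osism.tech/dockerhub/library/mariadb:11.8.4",
    "redis": "registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
}
running = dict(expected)               # everything matches, as in this run
errors, warnings = check_versions(expected, running)
print(f"Errors (version mismatches): {len(errors)}")
print(f"Warnings (expected containers not running): {len(warnings)}")
```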
2026-03-26 04:20:30.069822 | orchestrator | 2026-03-26 04:20:03 | INFO  | Starting group_vars file reorganization
2026-03-26 04:20:30.069943 | orchestrator | 2026-03-26 04:20:03 | INFO  | Moved 0 file(s) to their respective directories
2026-03-26 04:20:30.069962 | orchestrator | 2026-03-26 04:20:03 | INFO  | Group_vars file reorganization completed
2026-03-26 04:20:30.069975 | orchestrator | 2026-03-26 04:20:05 | INFO  | Starting variable preparation from inventory
2026-03-26 04:20:30.069987 | orchestrator | 2026-03-26 04:20:08 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-26 04:20:30.069998 | orchestrator | 2026-03-26 04:20:08 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-26 04:20:30.070009 | orchestrator | 2026-03-26 04:20:08 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-26 04:20:30.070104 | orchestrator | 2026-03-26 04:20:08 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-26 04:20:30.070126 | orchestrator | 2026-03-26 04:20:08 | INFO  | Variable preparation completed
2026-03-26 04:20:30.070145 | orchestrator | 2026-03-26 04:20:10 | INFO  | Starting inventory overwrite handling
2026-03-26 04:20:30.070164 | orchestrator | 2026-03-26 04:20:10 | INFO  | Handling group overwrites in 99-overwrite
2026-03-26 04:20:30.070175 | orchestrator | 2026-03-26 04:20:10 | INFO  | Removing group frr:children from 60-generic
2026-03-26 04:20:30.070186 | orchestrator | 2026-03-26 04:20:10 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-26 04:20:30.070197 | orchestrator | 2026-03-26 04:20:10 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-26 04:20:30.070208 | orchestrator | 2026-03-26 04:20:10 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-26 04:20:30.070219 | orchestrator | 2026-03-26 04:20:10 | INFO  | Handling group overwrites in 20-roles
2026-03-26 04:20:30.070229 | orchestrator | 2026-03-26 04:20:10 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-26 04:20:30.070240 | orchestrator | 2026-03-26 04:20:10 | INFO  | Removed 5 group(s) in total
2026-03-26 04:20:30.070251 | orchestrator | 2026-03-26 04:20:10 | INFO  | Inventory overwrite handling completed
2026-03-26 04:20:30.070262 | orchestrator | 2026-03-26 04:20:11 | INFO  | Starting merge of inventory files
2026-03-26 04:20:30.070273 | orchestrator | 2026-03-26 04:20:11 | INFO  | Inventory files merged successfully
2026-03-26 04:20:30.070392 | orchestrator | 2026-03-26 04:20:16 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-26 04:20:30.070406 | orchestrator | 2026-03-26 04:20:28 | INFO  | Successfully wrote ClusterShell configuration
2026-03-26 04:20:30.411379 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-26 04:20:30.411478 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-26 04:20:30.411494 | orchestrator | + local max_attempts=60
2026-03-26 04:20:30.411507 | orchestrator | + local name=kolla-ansible
2026-03-26 04:20:30.411519 | orchestrator | + local attempt_num=1
2026-03-26 04:20:30.411530 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-26 04:20:30.448826 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-26 04:20:30.448911 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-26 04:20:30.448925 | orchestrator | + local max_attempts=60
2026-03-26 04:20:30.448935 | orchestrator | + local name=osism-ansible
2026-03-26 04:20:30.448944 | orchestrator | + local attempt_num=1
2026-03-26 04:20:30.449679 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-26 04:20:30.482542 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-26 04:20:30.482622 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-03-26 04:20:30.683837 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-03-26 04:20:30.683930 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-26 04:20:30.683947 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-26 04:20:30.683980 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-03-26 04:20:30.683993 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-03-26 04:20:30.684004 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-03-26 04:20:30.684015 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-03-26 04:20:30.684025 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2026-03-26 04:20:30.684036 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 21 seconds ago
2026-03-26 04:20:30.684047 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp
2026-03-26 04:20:30.684057 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-03-26 04:20:30.684068 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp
2026-03-26 04:20:30.684078 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-03-26 04:20:30.684110 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-03-26 04:20:30.684121 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-03-26 04:20:30.684132 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-03-26 04:20:30.689696 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-03-26 04:20:30.689750 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-03-26 04:20:30.689762 | orchestrator | + osism apply facts
2026-03-26 04:20:42.772894 | orchestrator | 2026-03-26 04:20:42 | INFO  | Task fe19af64-9cef-441d-8bc1-20715ae45ad3 (facts) was prepared for execution.
2026-03-26 04:20:42.772982 | orchestrator | 2026-03-26 04:20:42 | INFO  | It takes a moment until task fe19af64-9cef-441d-8bc1-20715ae45ad3 (facts) has been started and output is visible here.
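The `wait_for_container_healthy` bash function traced above polls `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`, up to a maximum number of attempts. The same pattern can be sketched in Python with the status check injected as a callable (a hypothetical helper; the testbed itself uses the bash function shown in the trace):

```python
import time

def wait_for_healthy(get_status, max_attempts=60, delay=0.0):
    """Poll get_status() until it returns 'healthy' or attempts run out.

    get_status is any callable returning the container health string, e.g.
    wrapping `docker inspect -f '{{.State.Health.Status}}' NAME` via
    subprocess. Returns the number of attempts used; raises TimeoutError
    if the container never becomes healthy.
    """
    for attempt in range(1, max_attempts + 1):
        if get_status() == "healthy":
            return attempt
        time.sleep(delay)                   # back off between polls
    raise TimeoutError("container did not become healthy")


# Simulated container that becomes healthy on the third poll.
statuses = iter(["starting", "starting", "healthy"])
print(wait_for_healthy(lambda: next(statuses), delay=0))  # prints 3
```

Injecting the checker keeps the retry logic testable without a Docker daemon; in production the callable would shell out to `docker inspect`.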
2026-03-26 04:21:05.598249 | orchestrator |
2026-03-26 04:21:05.598344 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-26 04:21:05.598352 | orchestrator |
2026-03-26 04:21:05.598357 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-26 04:21:05.598361 | orchestrator | Thursday 26 March 2026 04:20:49 +0000 (0:00:01.971) 0:00:01.971 ********
2026-03-26 04:21:05.598365 | orchestrator | ok: [testbed-manager]
2026-03-26 04:21:05.598371 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:21:05.598375 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:21:05.598379 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:21:05.598383 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:21:05.598387 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:21:05.598391 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:21:05.598395 | orchestrator |
2026-03-26 04:21:05.598399 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-26 04:21:05.598403 | orchestrator | Thursday 26 March 2026 04:20:52 +0000 (0:00:03.524) 0:00:05.496 ********
2026-03-26 04:21:05.598407 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:21:05.598412 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:21:05.598416 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:21:05.598420 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:21:05.598424 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:21:05.598428 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:21:05.598432 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:21:05.598436 | orchestrator |
2026-03-26 04:21:05.598454 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-26 04:21:05.598459 | orchestrator |
2026-03-26 04:21:05.598463 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-26 04:21:05.598467 | orchestrator | Thursday 26 March 2026 04:20:55 +0000 (0:00:02.580) 0:00:08.077 ********
2026-03-26 04:21:05.598472 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:21:05.598476 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:21:05.598481 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:21:05.598485 | orchestrator | ok: [testbed-manager]
2026-03-26 04:21:05.598489 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:21:05.598494 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:21:05.598498 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:21:05.598502 | orchestrator |
2026-03-26 04:21:05.598507 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-26 04:21:05.598511 | orchestrator |
2026-03-26 04:21:05.598515 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-26 04:21:05.598520 | orchestrator | Thursday 26 March 2026 04:21:02 +0000 (0:00:07.088) 0:00:15.165 ********
2026-03-26 04:21:05.598524 | orchestrator | skipping: [testbed-manager]
2026-03-26 04:21:05.598543 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:21:05.598548 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:21:05.598552 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:21:05.598556 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:21:05.598561 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:21:05.598565 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:21:05.598569 | orchestrator |
2026-03-26 04:21:05.598574 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:21:05.598578 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:21:05.598584 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:21:05.598588 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:21:05.598592 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:21:05.598597 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:21:05.598601 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:21:05.598605 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-26 04:21:05.598610 | orchestrator |
2026-03-26 04:21:05.598614 | orchestrator |
2026-03-26 04:21:05.598618 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:21:05.598623 | orchestrator | Thursday 26 March 2026 04:21:05 +0000 (0:00:02.682) 0:00:17.848 ********
2026-03-26 04:21:05.598627 | orchestrator | ===============================================================================
2026-03-26 04:21:05.598631 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.09s
2026-03-26 04:21:05.598636 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.53s
2026-03-26 04:21:05.598640 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.68s
2026-03-26 04:21:05.598645 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.58s
2026-03-26 04:21:05.922538 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-03-26 04:21:06.028848 | orchestrator | + [[ 1 -ge 0 ]]
2026-03-26 04:21:06.029845 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-03-26 04:21:06.077134 | orchestrator | + OPENSTACK_VERSION=2025.1
2026-03-26 04:21:06.077212 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1
2026-03-26 04:21:06.085187 | orchestrator | + set -e
2026-03-26 04:21:06.085262 | orchestrator | + NAMESPACE=kolla/release/2025.1
2026-03-26 04:21:06.085274 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-03-26 04:21:06.095292 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh
2026-03-26 04:21:06.104288 | orchestrator |
2026-03-26 04:21:06.104370 | orchestrator | # UPGRADE SERVICES
2026-03-26 04:21:06.104379 | orchestrator |
2026-03-26 04:21:06.104386 | orchestrator | + set -e
2026-03-26 04:21:06.104394 | orchestrator | + echo
2026-03-26 04:21:06.104401 | orchestrator | + echo '# UPGRADE SERVICES'
2026-03-26 04:21:06.104408 | orchestrator | + echo
2026-03-26 04:21:06.104415 | orchestrator | + source /opt/manager-vars.sh
2026-03-26 04:21:06.104944 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-26 04:21:06.104963 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-26 04:21:06.104970 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-26 04:21:06.104976 | orchestrator | ++ CEPH_VERSION=reef
2026-03-26 04:21:06.104983 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-26 04:21:06.104991 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-26 04:21:06.105028 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-03-26 04:21:06.105098 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-03-26 04:21:06.105106 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-26 04:21:06.105114 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-26 04:21:06.105137 | orchestrator | ++ export ARA=false
2026-03-26 04:21:06.105240 | orchestrator | ++ ARA=false
2026-03-26 04:21:06.105248 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-26 04:21:06.105254 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-26 04:21:06.105260 | orchestrator | ++ export TEMPEST=false
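The upgrade gate above runs `semver 10.0.0-rc.1 10.0.0-0` and proceeds because the result is `-ge 0`, i.e. the release candidate ranks at or above the `-0` pre-release floor. SemVer pre-release precedence (numeric identifiers rank below alphanumeric ones, and a plain release outranks any pre-release of the same core version) can be sketched as follows (a minimal sketch, not the `semver` tool actually invoked by the script):

```python
def cmp_prerelease(a, b):
    """Compare dot-separated pre-release strings per SemVer precedence.

    An empty pre-release (a full release) outranks any pre-release.
    """
    if not a and not b:
        return 0
    if not a:
        return 1                        # release > pre-release
    if not b:
        return -1
    pa, pb = a.split("."), b.split(".")
    for x, y in zip(pa, pb):
        xn, yn = x.isdigit(), y.isdigit()
        if xn and yn:                   # both numeric: compare as ints
            if int(x) != int(y):
                return (int(x) > int(y)) - (int(x) < int(y))
        elif xn != yn:                  # numeric < alphanumeric
            return 1 if yn else -1
        elif x != y:                    # both alphanumeric: ASCII order
            return (x > y) - (x < y)
    return (len(pa) > len(pb)) - (len(pa) < len(pb))

def cmp_semver(a, b):
    """Return -1/0/1 comparing two SemVer strings (no build metadata)."""
    core_a, _, pre_a = a.partition("-")
    core_b, _, pre_b = b.partition("-")
    va = [int(p) for p in core_a.split(".")]
    vb = [int(p) for p in core_b.split(".")]
    if va != vb:
        return (va > vb) - (va < vb)
    return cmp_prerelease(pre_a, pre_b)

print(cmp_semver("10.0.0-rc.1", "10.0.0-0"))  # prints 1: "rc" outranks numeric "0"
```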
2026-03-26 04:21:06.105266 | orchestrator | ++ TEMPEST=false 2026-03-26 04:21:06.105273 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 04:21:06.105279 | orchestrator | ++ IS_ZUUL=true 2026-03-26 04:21:06.105287 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 04:21:06.105294 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 04:21:06.105326 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 04:21:06.105333 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 04:21:06.105338 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 04:21:06.105344 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 04:21:06.105350 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 04:21:06.105356 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 04:21:06.105362 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 04:21:06.105368 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 04:21:06.105376 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-26 04:21:06.105382 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-26 04:21:06.105388 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-03-26 04:21:06.105395 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-03-26 04:21:06.105401 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-03-26 04:21:06.115213 | orchestrator | + set -e 2026-03-26 04:21:06.115275 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 04:21:06.115862 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 04:21:06.115908 | orchestrator | ++ INTERACTIVE=false 2026-03-26 04:21:06.115917 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 04:21:06.115925 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 04:21:06.116147 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 04:21:06.116162 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 04:21:06.116168 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 04:21:06.116175 | orchestrator | ++ 
export CEPH_VERSION=reef 2026-03-26 04:21:06.116182 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 04:21:06.116253 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 04:21:06.116263 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 04:21:06.116270 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 04:21:06.116277 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 04:21:06.116283 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 04:21:06.116290 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 04:21:06.116427 | orchestrator | ++ export ARA=false 2026-03-26 04:21:06.116438 | orchestrator | ++ ARA=false 2026-03-26 04:21:06.116445 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 04:21:06.116451 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 04:21:06.116457 | orchestrator | ++ export TEMPEST=false 2026-03-26 04:21:06.116464 | orchestrator | ++ TEMPEST=false 2026-03-26 04:21:06.116470 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 04:21:06.116476 | orchestrator | ++ IS_ZUUL=true 2026-03-26 04:21:06.116484 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 04:21:06.116502 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 04:21:06.116509 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 04:21:06.116515 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 04:21:06.116521 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 04:21:06.116527 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 04:21:06.116535 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 04:21:06.116541 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 04:21:06.116607 | orchestrator | 2026-03-26 04:21:06.116617 | orchestrator | # PULL IMAGES 2026-03-26 04:21:06.116624 | orchestrator | 2026-03-26 04:21:06.116630 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 04:21:06.116637 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 04:21:06.116643 | orchestrator 
| ++ export RABBITMQ3TO4=true 2026-03-26 04:21:06.116650 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-26 04:21:06.116656 | orchestrator | + echo 2026-03-26 04:21:06.116663 | orchestrator | + echo '# PULL IMAGES' 2026-03-26 04:21:06.116669 | orchestrator | + echo 2026-03-26 04:21:06.117851 | orchestrator | ++ semver 9.5.0 7.0.0 2026-03-26 04:21:06.183545 | orchestrator | + [[ 1 -ge 0 ]] 2026-03-26 04:21:06.183624 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-26 04:21:08.297259 | orchestrator | 2026-03-26 04:21:08 | INFO  | Trying to run play pull-images in environment custom 2026-03-26 04:21:18.553472 | orchestrator | 2026-03-26 04:21:18 | INFO  | Task 411ad312-6a16-41aa-83a0-93c728c3fae0 (pull-images) was prepared for execution. 2026-03-26 04:21:18.553582 | orchestrator | 2026-03-26 04:21:18 | INFO  | Task 411ad312-6a16-41aa-83a0-93c728c3fae0 is running in background. No more output. Check ARA for logs. 2026-03-26 04:21:18.894236 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-03-26 04:21:18.901620 | orchestrator | + set -e 2026-03-26 04:21:18.901666 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 04:21:18.901679 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 04:21:18.901691 | orchestrator | ++ INTERACTIVE=false 2026-03-26 04:21:18.901702 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 04:21:18.901712 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 04:21:18.901723 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-26 04:21:18.903399 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-03-26 04:21:18.914488 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-03-26 04:21:18.914527 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-03-26 04:21:18.915332 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-03-26 04:21:18.970831 | orchestrator | 
+ [[ 1 -ge 0 ]] 2026-03-26 04:21:18.970922 | orchestrator | + osism apply frr 2026-03-26 04:21:31.087216 | orchestrator | 2026-03-26 04:21:31 | INFO  | Task 0d394b63-d017-4309-be34-0efdf0caac8b (frr) was prepared for execution. 2026-03-26 04:21:31.087377 | orchestrator | 2026-03-26 04:21:31 | INFO  | It takes a moment until task 0d394b63-d017-4309-be34-0efdf0caac8b (frr) has been started and output is visible here. 2026-03-26 04:22:02.722981 | orchestrator | 2026-03-26 04:22:02.724024 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-26 04:22:02.724075 | orchestrator | 2026-03-26 04:22:02.724097 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-26 04:22:02.724118 | orchestrator | Thursday 26 March 2026 04:21:38 +0000 (0:00:02.965) 0:00:02.965 ******** 2026-03-26 04:22:02.724139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-26 04:22:02.724161 | orchestrator | 2026-03-26 04:22:02.724182 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-26 04:22:02.724202 | orchestrator | Thursday 26 March 2026 04:21:40 +0000 (0:00:02.108) 0:00:05.074 ******** 2026-03-26 04:22:02.724222 | orchestrator | ok: [testbed-manager] 2026-03-26 04:22:02.724243 | orchestrator | 2026-03-26 04:22:02.724263 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-26 04:22:02.724283 | orchestrator | Thursday 26 March 2026 04:21:42 +0000 (0:00:02.058) 0:00:07.133 ******** 2026-03-26 04:22:02.724303 | orchestrator | ok: [testbed-manager] 2026-03-26 04:22:02.724323 | orchestrator | 2026-03-26 04:22:02.724372 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-26 04:22:02.724390 | orchestrator | Thursday 26 March 2026 
04:21:45 +0000 (0:00:02.810) 0:00:09.943 ******** 2026-03-26 04:22:02.724410 | orchestrator | ok: [testbed-manager] 2026-03-26 04:22:02.724428 | orchestrator | 2026-03-26 04:22:02.724446 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-26 04:22:02.724463 | orchestrator | Thursday 26 March 2026 04:21:47 +0000 (0:00:01.954) 0:00:11.898 ******** 2026-03-26 04:22:02.724481 | orchestrator | ok: [testbed-manager] 2026-03-26 04:22:02.724498 | orchestrator | 2026-03-26 04:22:02.724516 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-26 04:22:02.724534 | orchestrator | Thursday 26 March 2026 04:21:49 +0000 (0:00:01.980) 0:00:13.878 ******** 2026-03-26 04:22:02.724552 | orchestrator | ok: [testbed-manager] 2026-03-26 04:22:02.724569 | orchestrator | 2026-03-26 04:22:02.724587 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-26 04:22:02.724606 | orchestrator | Thursday 26 March 2026 04:21:51 +0000 (0:00:02.413) 0:00:16.291 ******** 2026-03-26 04:22:02.724623 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:22:02.724680 | orchestrator | 2026-03-26 04:22:02.724699 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-26 04:22:02.724717 | orchestrator | Thursday 26 March 2026 04:21:53 +0000 (0:00:01.167) 0:00:17.459 ******** 2026-03-26 04:22:02.724736 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:22:02.724754 | orchestrator | 2026-03-26 04:22:02.724772 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-26 04:22:02.724790 | orchestrator | Thursday 26 March 2026 04:21:54 +0000 (0:00:01.140) 0:00:18.600 ******** 2026-03-26 04:22:02.724809 | orchestrator | ok: [testbed-manager] 2026-03-26 04:22:02.724827 | orchestrator | 2026-03-26 04:22:02.724846 | orchestrator | TASK 
[osism.services.frr : Set sysctl parameters] ****************************** 2026-03-26 04:22:02.724864 | orchestrator | Thursday 26 March 2026 04:21:56 +0000 (0:00:01.916) 0:00:20.516 ******** 2026-03-26 04:22:02.724881 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-26 04:22:02.724898 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-26 04:22:02.724916 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-26 04:22:02.724953 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-26 04:22:02.724970 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-26 04:22:02.724988 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-26 04:22:02.725004 | orchestrator | 2026-03-26 04:22:02.725020 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-03-26 04:22:02.725101 | orchestrator | Thursday 26 March 2026 04:21:59 +0000 (0:00:03.629) 0:00:24.146 ******** 2026-03-26 04:22:02.725120 | orchestrator | ok: [testbed-manager] 2026-03-26 04:22:02.725136 | orchestrator | 2026-03-26 04:22:02.725152 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:22:02.725167 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 04:22:02.725183 | orchestrator | 2026-03-26 04:22:02.725199 | orchestrator | 2026-03-26 04:22:02.725215 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:22:02.725230 | orchestrator | Thursday 26 March 2026 04:22:02 +0000 (0:00:02.615) 0:00:26.761 ******** 2026-03-26 
04:22:02.725244 | orchestrator | =============================================================================== 2026-03-26 04:22:02.725260 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.63s 2026-03-26 04:22:02.725276 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.81s 2026-03-26 04:22:02.725293 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.62s 2026-03-26 04:22:02.725419 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.41s 2026-03-26 04:22:02.725442 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.11s 2026-03-26 04:22:02.725453 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.06s 2026-03-26 04:22:02.725462 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.98s 2026-03-26 04:22:02.725472 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.95s 2026-03-26 04:22:02.725512 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.92s 2026-03-26 04:22:02.725530 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.17s 2026-03-26 04:22:02.725546 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.14s 2026-03-26 04:22:03.044781 | orchestrator | + osism apply kubernetes 2026-03-26 04:22:05.199226 | orchestrator | 2026-03-26 04:22:05 | INFO  | Task fe2e5425-917f-42a5-93e8-ab4ed1c92939 (kubernetes) was prepared for execution. 2026-03-26 04:22:05.199415 | orchestrator | 2026-03-26 04:22:05 | INFO  | It takes a moment until task fe2e5425-917f-42a5-93e8-ab4ed1c92939 (kubernetes) has been started and output is visible here. 
2026-03-26 04:22:50.339311 | orchestrator | 2026-03-26 04:22:50.339427 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-26 04:22:50.339443 | orchestrator | 2026-03-26 04:22:50.339455 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-26 04:22:50.339467 | orchestrator | Thursday 26 March 2026 04:22:11 +0000 (0:00:01.796) 0:00:01.796 ******** 2026-03-26 04:22:50.339478 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:22:50.339490 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:22:50.339501 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:22:50.339511 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:22:50.339522 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:22:50.339533 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:22:50.339543 | orchestrator | 2026-03-26 04:22:50.339554 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-26 04:22:50.339565 | orchestrator | Thursday 26 March 2026 04:22:16 +0000 (0:00:05.407) 0:00:07.204 ******** 2026-03-26 04:22:50.339582 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.339600 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.339611 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.339621 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.339633 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.339652 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.339664 | orchestrator | 2026-03-26 04:22:50.339674 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-26 04:22:50.339685 | orchestrator | Thursday 26 March 2026 04:22:18 +0000 (0:00:01.878) 0:00:09.082 ******** 2026-03-26 04:22:50.339696 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.339706 | orchestrator | skipping: [testbed-node-4] 2026-03-26 
04:22:50.339717 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.339727 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.339738 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.339749 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.339761 | orchestrator | 2026-03-26 04:22:50.339774 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-26 04:22:50.339786 | orchestrator | Thursday 26 March 2026 04:22:20 +0000 (0:00:01.950) 0:00:11.033 ******** 2026-03-26 04:22:50.339798 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:22:50.339810 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:22:50.339830 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:22:50.339843 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:22:50.339855 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:22:50.339867 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:22:50.339878 | orchestrator | 2026-03-26 04:22:50.339890 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-26 04:22:50.339949 | orchestrator | Thursday 26 March 2026 04:22:23 +0000 (0:00:03.154) 0:00:14.187 ******** 2026-03-26 04:22:50.339962 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:22:50.339979 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:22:50.339995 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:22:50.340007 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:22:50.340018 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:22:50.340029 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:22:50.340039 | orchestrator | 2026-03-26 04:22:50.340050 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-26 04:22:50.340061 | orchestrator | Thursday 26 March 2026 04:22:26 +0000 (0:00:02.308) 0:00:16.496 ******** 2026-03-26 04:22:50.340071 | orchestrator | ok: [testbed-node-3] 2026-03-26 
04:22:50.340082 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:22:50.340092 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:22:50.340103 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:22:50.340113 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:22:50.340146 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:22:50.340157 | orchestrator | 2026-03-26 04:22:50.340168 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-26 04:22:50.340178 | orchestrator | Thursday 26 March 2026 04:22:28 +0000 (0:00:02.093) 0:00:18.590 ******** 2026-03-26 04:22:50.340189 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.340200 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.340239 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.340256 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.340267 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.340277 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.340288 | orchestrator | 2026-03-26 04:22:50.340299 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-26 04:22:50.340309 | orchestrator | Thursday 26 March 2026 04:22:30 +0000 (0:00:02.232) 0:00:20.822 ******** 2026-03-26 04:22:50.340320 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.340330 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.340340 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.340351 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.340361 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.340371 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.340382 | orchestrator | 2026-03-26 04:22:50.340392 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-26 04:22:50.340414 | orchestrator | Thursday 26 March 2026 04:22:32 +0000 
(0:00:01.722) 0:00:22.544 ******** 2026-03-26 04:22:50.340426 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 04:22:50.340437 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 04:22:50.340447 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.340458 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 04:22:50.340468 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 04:22:50.340479 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.340489 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 04:22:50.340500 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 04:22:50.340510 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.340520 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 04:22:50.340531 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 04:22:50.340542 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.340573 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 04:22:50.340584 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 04:22:50.340595 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.340606 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-26 04:22:50.340616 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-26 04:22:50.340635 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.340646 | orchestrator | 2026-03-26 04:22:50.340657 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-03-26 04:22:50.340667 | orchestrator | Thursday 26 March 2026 04:22:34 +0000 (0:00:01.977) 0:00:24.522 ******** 2026-03-26 04:22:50.340678 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.340688 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.340699 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.340709 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.340720 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.340730 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.340740 | orchestrator | 2026-03-26 04:22:50.340760 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-26 04:22:50.340771 | orchestrator | Thursday 26 March 2026 04:22:36 +0000 (0:00:02.055) 0:00:26.577 ******** 2026-03-26 04:22:50.340782 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:22:50.340792 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:22:50.340803 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:22:50.340813 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:22:50.340824 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:22:50.340834 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:22:50.340845 | orchestrator | 2026-03-26 04:22:50.340856 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-26 04:22:50.340866 | orchestrator | Thursday 26 March 2026 04:22:39 +0000 (0:00:03.010) 0:00:29.588 ******** 2026-03-26 04:22:50.340877 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:22:50.340887 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:22:50.340898 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:22:50.340908 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:22:50.340919 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:22:50.340929 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:22:50.340939 | 
orchestrator | 2026-03-26 04:22:50.340950 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-26 04:22:50.340960 | orchestrator | Thursday 26 March 2026 04:22:41 +0000 (0:00:02.691) 0:00:32.280 ******** 2026-03-26 04:22:50.340971 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.340982 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.340992 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.341003 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.341013 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.341024 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.341034 | orchestrator | 2026-03-26 04:22:50.341045 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-26 04:22:50.341055 | orchestrator | Thursday 26 March 2026 04:22:43 +0000 (0:00:01.934) 0:00:34.214 ******** 2026-03-26 04:22:50.341066 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.341076 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.341087 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.341097 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.341108 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.341118 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.341128 | orchestrator | 2026-03-26 04:22:50.341139 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-26 04:22:50.341151 | orchestrator | Thursday 26 March 2026 04:22:46 +0000 (0:00:02.286) 0:00:36.500 ******** 2026-03-26 04:22:50.341161 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.341172 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.341182 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.341193 | orchestrator | skipping: 
[testbed-node-0] 2026-03-26 04:22:50.341207 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.341248 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:22:50.341266 | orchestrator | 2026-03-26 04:22:50.341290 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-26 04:22:50.341309 | orchestrator | Thursday 26 March 2026 04:22:47 +0000 (0:00:01.698) 0:00:38.199 ******** 2026-03-26 04:22:50.341325 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-26 04:22:50.341344 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-26 04:22:50.341361 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.341379 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-26 04:22:50.341395 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-26 04:22:50.341414 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.341431 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-26 04:22:50.341450 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-26 04:22:50.341481 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:22:50.341499 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-26 04:22:50.341518 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-26 04:22:50.341536 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:22:50.341554 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-26 04:22:50.341573 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-26 04:22:50.341592 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:22:50.341610 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-26 04:22:50.341628 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-26 04:22:50.341647 | orchestrator | skipping: [testbed-node-2] 2026-03-26 
04:22:50.341664 | orchestrator | 2026-03-26 04:22:50.341681 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-26 04:22:50.341697 | orchestrator | Thursday 26 March 2026 04:22:49 +0000 (0:00:02.052) 0:00:40.251 ******** 2026-03-26 04:22:50.341713 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:22:50.341730 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:22:50.341760 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:24:28.564295 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:24:28.564411 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.564427 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.564438 | orchestrator | 2026-03-26 04:24:28.564451 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-26 04:24:28.564463 | orchestrator | Thursday 26 March 2026 04:22:51 +0000 (0:00:01.867) 0:00:42.119 ******** 2026-03-26 04:24:28.564473 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:24:28.564484 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:24:28.564495 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:24:28.564506 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:24:28.564516 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.564526 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.564537 | orchestrator | 2026-03-26 04:24:28.564548 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-26 04:24:28.564558 | orchestrator | 2026-03-26 04:24:28.564569 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-26 04:24:28.564580 | orchestrator | Thursday 26 March 2026 04:22:54 +0000 (0:00:02.859) 0:00:44.979 ******** 2026-03-26 04:24:28.564591 | orchestrator | ok: [testbed-node-0] 2026-03-26 
04:24:28.564602 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.564613 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.564623 | orchestrator | 2026-03-26 04:24:28.564634 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-26 04:24:28.564664 | orchestrator | Thursday 26 March 2026 04:22:56 +0000 (0:00:01.767) 0:00:46.747 ******** 2026-03-26 04:24:28.564676 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.564686 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.564697 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.564707 | orchestrator | 2026-03-26 04:24:28.564718 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-26 04:24:28.564729 | orchestrator | Thursday 26 March 2026 04:22:58 +0000 (0:00:02.080) 0:00:48.828 ******** 2026-03-26 04:24:28.564739 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:24:28.564750 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:24:28.564760 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:24:28.564771 | orchestrator | 2026-03-26 04:24:28.564823 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-26 04:24:28.564837 | orchestrator | Thursday 26 March 2026 04:23:00 +0000 (0:00:02.134) 0:00:50.962 ******** 2026-03-26 04:24:28.564849 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.564861 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.564873 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.564884 | orchestrator | 2026-03-26 04:24:28.564921 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-26 04:24:28.564934 | orchestrator | Thursday 26 March 2026 04:23:02 +0000 (0:00:01.964) 0:00:52.926 ******** 2026-03-26 04:24:28.564946 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:24:28.564959 | orchestrator | skipping: 
[testbed-node-1] 2026-03-26 04:24:28.564970 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.564982 | orchestrator | 2026-03-26 04:24:28.564995 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-26 04:24:28.565005 | orchestrator | Thursday 26 March 2026 04:23:03 +0000 (0:00:01.363) 0:00:54.290 ******** 2026-03-26 04:24:28.565016 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.565027 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.565037 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.565048 | orchestrator | 2026-03-26 04:24:28.565058 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-26 04:24:28.565069 | orchestrator | Thursday 26 March 2026 04:23:05 +0000 (0:00:01.705) 0:00:55.995 ******** 2026-03-26 04:24:28.565080 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.565091 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.565101 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.565112 | orchestrator | 2026-03-26 04:24:28.565122 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-26 04:24:28.565133 | orchestrator | Thursday 26 March 2026 04:23:07 +0000 (0:00:02.200) 0:00:58.196 ******** 2026-03-26 04:24:28.565144 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:24:28.565154 | orchestrator | 2026-03-26 04:24:28.565165 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-03-26 04:24:28.565176 | orchestrator | Thursday 26 March 2026 04:23:09 +0000 (0:00:01.908) 0:01:00.104 ******** 2026-03-26 04:24:28.565186 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.565197 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.565207 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.565218 | 
orchestrator | 2026-03-26 04:24:28.565228 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-26 04:24:28.565239 | orchestrator | Thursday 26 March 2026 04:23:12 +0000 (0:00:02.526) 0:01:02.631 ******** 2026-03-26 04:24:28.565250 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.565260 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.565271 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.565281 | orchestrator | 2026-03-26 04:24:28.565292 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-26 04:24:28.565303 | orchestrator | Thursday 26 March 2026 04:23:13 +0000 (0:00:01.669) 0:01:04.300 ******** 2026-03-26 04:24:28.565313 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.565324 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.565334 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:24:28.565345 | orchestrator | 2026-03-26 04:24:28.565356 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-26 04:24:28.565366 | orchestrator | Thursday 26 March 2026 04:23:15 +0000 (0:00:01.858) 0:01:06.159 ******** 2026-03-26 04:24:28.565377 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.565387 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.565398 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:24:28.565408 | orchestrator | 2026-03-26 04:24:28.565419 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-26 04:24:28.565429 | orchestrator | Thursday 26 March 2026 04:23:18 +0000 (0:00:02.518) 0:01:08.677 ******** 2026-03-26 04:24:28.565440 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:24:28.565451 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.565478 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.565489 | 
orchestrator | 2026-03-26 04:24:28.565500 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-26 04:24:28.565511 | orchestrator | Thursday 26 March 2026 04:23:19 +0000 (0:00:01.368) 0:01:10.046 ******** 2026-03-26 04:24:28.565529 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:24:28.565540 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.565550 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.565561 | orchestrator | 2026-03-26 04:24:28.565572 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-26 04:24:28.565582 | orchestrator | Thursday 26 March 2026 04:23:21 +0000 (0:00:01.699) 0:01:11.745 ******** 2026-03-26 04:24:28.565593 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:24:28.565603 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:24:28.565614 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:24:28.565624 | orchestrator | 2026-03-26 04:24:28.565635 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-26 04:24:28.565645 | orchestrator | Thursday 26 March 2026 04:23:23 +0000 (0:00:02.229) 0:01:13.975 ******** 2026-03-26 04:24:28.565656 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.565666 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.565677 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.565687 | orchestrator | 2026-03-26 04:24:28.565698 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-26 04:24:28.565709 | orchestrator | Thursday 26 March 2026 04:23:25 +0000 (0:00:01.820) 0:01:15.796 ******** 2026-03-26 04:24:28.565719 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.565730 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.565740 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.565751 | orchestrator | 2026-03-26 04:24:28.565762 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-26 04:24:28.565791 | orchestrator | Thursday 26 March 2026 04:23:26 +0000 (0:00:01.411) 0:01:17.208 ******** 2026-03-26 04:24:28.565803 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-26 04:24:28.565815 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-26 04:24:28.565826 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-26 04:24:28.565837 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-26 04:24:28.565847 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-26 04:24:28.565858 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2026-03-26 04:24:28.565868 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.565879 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.565889 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.565900 | orchestrator | 2026-03-26 04:24:28.565911 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-26 04:24:28.565921 | orchestrator | Thursday 26 March 2026 04:23:50 +0000 (0:00:23.365) 0:01:40.573 ******** 2026-03-26 04:24:28.565932 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:24:28.565942 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:24:28.565953 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:24:28.565963 | orchestrator | 2026-03-26 04:24:28.565974 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-26 04:24:28.565984 | orchestrator | Thursday 26 March 2026 04:23:51 +0000 (0:00:01.362) 0:01:41.936 ******** 2026-03-26 04:24:28.565995 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:24:28.566005 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:24:28.566077 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:24:28.566092 | orchestrator | 2026-03-26 04:24:28.566103 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-26 04:24:28.566121 | orchestrator | Thursday 26 March 2026 04:23:53 +0000 (0:00:02.075) 0:01:44.011 ******** 2026-03-26 04:24:28.566132 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.566142 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.566162 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.566173 | orchestrator | 2026-03-26 04:24:28.566184 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-26 04:24:28.566194 | orchestrator | Thursday 26 March 2026 04:23:55 +0000 (0:00:02.314) 0:01:46.326 ******** 2026-03-26 04:24:28.566205 | orchestrator 
| changed: [testbed-node-0] 2026-03-26 04:24:28.566215 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:24:28.566226 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:24:28.566236 | orchestrator | 2026-03-26 04:24:28.566247 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-26 04:24:28.566257 | orchestrator | Thursday 26 March 2026 04:24:23 +0000 (0:00:27.061) 0:02:13.387 ******** 2026-03-26 04:24:28.566268 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.566278 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.566289 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.566299 | orchestrator | 2026-03-26 04:24:28.566310 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-26 04:24:28.566327 | orchestrator | Thursday 26 March 2026 04:24:24 +0000 (0:00:01.800) 0:02:15.187 ******** 2026-03-26 04:24:28.566338 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:24:28.566349 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:24:28.566359 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:24:28.566370 | orchestrator | 2026-03-26 04:24:28.566380 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-26 04:24:28.566391 | orchestrator | Thursday 26 March 2026 04:24:26 +0000 (0:00:01.708) 0:02:16.895 ******** 2026-03-26 04:24:28.566401 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:24:28.566412 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:24:28.566422 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:24:28.566433 | orchestrator | 2026-03-26 04:24:28.566452 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-26 04:25:16.332105 | orchestrator | Thursday 26 March 2026 04:24:28 +0000 (0:00:01.997) 0:02:18.893 ******** 2026-03-26 04:25:16.332289 | orchestrator | ok: [testbed-node-1] 2026-03-26 
04:25:16.332321 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:25:16.332340 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:25:16.332358 | orchestrator | 2026-03-26 04:25:16.332378 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-26 04:25:16.332397 | orchestrator | Thursday 26 March 2026 04:24:30 +0000 (0:00:01.744) 0:02:20.638 ******** 2026-03-26 04:25:16.332415 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:25:16.332434 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:25:16.332449 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:25:16.332460 | orchestrator | 2026-03-26 04:25:16.332471 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-26 04:25:16.332482 | orchestrator | Thursday 26 March 2026 04:24:31 +0000 (0:00:01.384) 0:02:22.023 ******** 2026-03-26 04:25:16.332493 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:25:16.332505 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:25:16.332516 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:25:16.332526 | orchestrator | 2026-03-26 04:25:16.332537 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-26 04:25:16.332547 | orchestrator | Thursday 26 March 2026 04:24:33 +0000 (0:00:01.654) 0:02:23.677 ******** 2026-03-26 04:25:16.332559 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:25:16.332570 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:25:16.332620 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:25:16.332634 | orchestrator | 2026-03-26 04:25:16.332647 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-26 04:25:16.332660 | orchestrator | Thursday 26 March 2026 04:24:35 +0000 (0:00:01.978) 0:02:25.655 ******** 2026-03-26 04:25:16.332672 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:25:16.332708 | orchestrator | changed: 
[testbed-node-1] 2026-03-26 04:25:16.332720 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:25:16.332732 | orchestrator | 2026-03-26 04:25:16.332744 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-26 04:25:16.332768 | orchestrator | Thursday 26 March 2026 04:24:37 +0000 (0:00:01.865) 0:02:27.520 ******** 2026-03-26 04:25:16.332780 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:25:16.332792 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:25:16.332804 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:25:16.332816 | orchestrator | 2026-03-26 04:25:16.332828 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-26 04:25:16.332840 | orchestrator | Thursday 26 March 2026 04:24:39 +0000 (0:00:02.004) 0:02:29.525 ******** 2026-03-26 04:25:16.332852 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:25:16.332865 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:25:16.332877 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:25:16.332888 | orchestrator | 2026-03-26 04:25:16.332900 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-26 04:25:16.332912 | orchestrator | Thursday 26 March 2026 04:24:40 +0000 (0:00:01.370) 0:02:30.896 ******** 2026-03-26 04:25:16.332924 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:25:16.332936 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:25:16.332947 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:25:16.332957 | orchestrator | 2026-03-26 04:25:16.332968 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-26 04:25:16.332979 | orchestrator | Thursday 26 March 2026 04:24:41 +0000 (0:00:01.317) 0:02:32.214 ******** 2026-03-26 04:25:16.332989 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:25:16.333000 | orchestrator | ok: [testbed-node-0] 
2026-03-26 04:25:16.333012 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:25:16.333022 | orchestrator | 2026-03-26 04:25:16.333033 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-26 04:25:16.333044 | orchestrator | Thursday 26 March 2026 04:24:43 +0000 (0:00:01.658) 0:02:33.872 ******** 2026-03-26 04:25:16.333054 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:25:16.333065 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:25:16.333075 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:25:16.333086 | orchestrator | 2026-03-26 04:25:16.333098 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-26 04:25:16.333110 | orchestrator | Thursday 26 March 2026 04:24:45 +0000 (0:00:01.650) 0:02:35.523 ******** 2026-03-26 04:25:16.333121 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-26 04:25:16.333133 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-26 04:25:16.333143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-26 04:25:16.333154 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-26 04:25:16.333164 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-26 04:25:16.333175 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-26 04:25:16.333187 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-26 04:25:16.333197 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-26 04:25:16.333208 | 
orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-26 04:25:16.333219 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-26 04:25:16.333229 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-26 04:25:16.333247 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-26 04:25:16.333279 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-26 04:25:16.333291 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-26 04:25:16.333301 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-26 04:25:16.333312 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-26 04:25:16.333322 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-26 04:25:16.333333 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-26 04:25:16.333343 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-26 04:25:16.333353 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-26 04:25:16.333364 | orchestrator | 2026-03-26 04:25:16.333374 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-26 04:25:16.333385 | orchestrator | 2026-03-26 04:25:16.333395 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-26 04:25:16.333406 | orchestrator | Thursday 26 March 2026 04:24:49 +0000 (0:00:04.287) 0:02:39.810 ******** 
2026-03-26 04:25:16.333417 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:25:16.333427 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:25:16.333438 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:25:16.333448 | orchestrator | 2026-03-26 04:25:16.333459 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-26 04:25:16.333469 | orchestrator | Thursday 26 March 2026 04:24:50 +0000 (0:00:01.359) 0:02:41.170 ******** 2026-03-26 04:25:16.333480 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:25:16.333490 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:25:16.333501 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:25:16.333511 | orchestrator | 2026-03-26 04:25:16.333522 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-26 04:25:16.333533 | orchestrator | Thursday 26 March 2026 04:24:52 +0000 (0:00:01.639) 0:02:42.809 ******** 2026-03-26 04:25:16.333543 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:25:16.333554 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:25:16.333564 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:25:16.333574 | orchestrator | 2026-03-26 04:25:16.333637 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-26 04:25:16.333650 | orchestrator | Thursday 26 March 2026 04:24:54 +0000 (0:00:01.553) 0:02:44.363 ******** 2026-03-26 04:25:16.333661 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 04:25:16.333672 | orchestrator | 2026-03-26 04:25:16.333683 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-26 04:25:16.333694 | orchestrator | Thursday 26 March 2026 04:24:55 +0000 (0:00:01.704) 0:02:46.067 ******** 2026-03-26 04:25:16.333705 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:25:16.333716 | orchestrator | 
skipping: [testbed-node-4] 2026-03-26 04:25:16.333726 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:25:16.333737 | orchestrator | 2026-03-26 04:25:16.333747 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-26 04:25:16.333758 | orchestrator | Thursday 26 March 2026 04:24:57 +0000 (0:00:01.488) 0:02:47.556 ******** 2026-03-26 04:25:16.333769 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:25:16.333779 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:25:16.333790 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:25:16.333800 | orchestrator | 2026-03-26 04:25:16.333811 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-26 04:25:16.333821 | orchestrator | Thursday 26 March 2026 04:24:58 +0000 (0:00:01.376) 0:02:48.932 ******** 2026-03-26 04:25:16.333839 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:25:16.333850 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:25:16.333860 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:25:16.333871 | orchestrator | 2026-03-26 04:25:16.333882 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-26 04:25:16.333892 | orchestrator | Thursday 26 March 2026 04:25:00 +0000 (0:00:01.437) 0:02:50.370 ******** 2026-03-26 04:25:16.333903 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:25:16.333913 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:25:16.333924 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:25:16.333934 | orchestrator | 2026-03-26 04:25:16.333945 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-26 04:25:16.333962 | orchestrator | Thursday 26 March 2026 04:25:01 +0000 (0:00:01.714) 0:02:52.085 ******** 2026-03-26 04:25:16.333972 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:25:16.333982 | orchestrator | ok: [testbed-node-4] 
2026-03-26 04:25:16.333991 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:25:16.334000 | orchestrator | 2026-03-26 04:25:16.334010 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-26 04:25:16.334117 | orchestrator | Thursday 26 March 2026 04:25:03 +0000 (0:00:02.180) 0:02:54.265 ******** 2026-03-26 04:25:16.334129 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:25:16.334138 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:25:16.334147 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:25:16.334157 | orchestrator | 2026-03-26 04:25:16.334166 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-26 04:25:16.334176 | orchestrator | Thursday 26 March 2026 04:25:06 +0000 (0:00:02.250) 0:02:56.516 ******** 2026-03-26 04:25:16.334185 | orchestrator | changed: [testbed-node-3] 2026-03-26 04:25:16.334195 | orchestrator | changed: [testbed-node-5] 2026-03-26 04:25:16.334204 | orchestrator | changed: [testbed-node-4] 2026-03-26 04:25:16.334214 | orchestrator | 2026-03-26 04:25:16.334223 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-26 04:25:16.334232 | orchestrator | 2026-03-26 04:25:16.334242 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-26 04:25:16.334251 | orchestrator | Thursday 26 March 2026 04:25:14 +0000 (0:00:07.927) 0:03:04.443 ******** 2026-03-26 04:25:16.334261 | orchestrator | ok: [testbed-manager] 2026-03-26 04:25:16.334270 | orchestrator | 2026-03-26 04:25:16.334279 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-26 04:25:16.334298 | orchestrator | Thursday 26 March 2026 04:25:16 +0000 (0:00:02.221) 0:03:06.664 ******** 2026-03-26 04:26:25.525514 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.525642 | orchestrator | 2026-03-26 04:26:25.525661 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-26 04:26:25.525675 | orchestrator | Thursday 26 March 2026 04:25:17 +0000 (0:00:01.434) 0:03:08.099 ******** 2026-03-26 04:26:25.525687 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-26 04:26:25.525697 | orchestrator | 2026-03-26 04:26:25.525708 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-26 04:26:25.525719 | orchestrator | Thursday 26 March 2026 04:25:19 +0000 (0:00:01.540) 0:03:09.640 ******** 2026-03-26 04:26:25.525730 | orchestrator | changed: [testbed-manager] 2026-03-26 04:26:25.525741 | orchestrator | 2026-03-26 04:26:25.525752 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-26 04:26:25.525762 | orchestrator | Thursday 26 March 2026 04:25:21 +0000 (0:00:01.925) 0:03:11.565 ******** 2026-03-26 04:26:25.525773 | orchestrator | changed: [testbed-manager] 2026-03-26 04:26:25.525784 | orchestrator | 2026-03-26 04:26:25.525794 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-26 04:26:25.525805 | orchestrator | Thursday 26 March 2026 04:25:22 +0000 (0:00:01.537) 0:03:13.103 ******** 2026-03-26 04:26:25.525815 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-26 04:26:25.525826 | orchestrator | 2026-03-26 04:26:25.525856 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-26 04:26:25.525867 | orchestrator | Thursday 26 March 2026 04:25:25 +0000 (0:00:02.937) 0:03:16.040 ******** 2026-03-26 04:26:25.525878 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-26 04:26:25.525888 | orchestrator | 2026-03-26 04:26:25.525899 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-26 04:26:25.525910 | orchestrator | Thursday 26 March 
2026 04:25:27 +0000 (0:00:01.787) 0:03:17.828 ******** 2026-03-26 04:26:25.525936 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.525947 | orchestrator | 2026-03-26 04:26:25.525958 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-26 04:26:25.525971 | orchestrator | Thursday 26 March 2026 04:25:28 +0000 (0:00:01.448) 0:03:19.277 ******** 2026-03-26 04:26:25.525983 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.525994 | orchestrator | 2026-03-26 04:26:25.526006 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-26 04:26:25.526083 | orchestrator | 2026-03-26 04:26:25.526105 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-26 04:26:25.526125 | orchestrator | Thursday 26 March 2026 04:25:30 +0000 (0:00:01.867) 0:03:21.144 ******** 2026-03-26 04:26:25.526145 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526164 | orchestrator | 2026-03-26 04:26:25.526184 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-26 04:26:25.526205 | orchestrator | Thursday 26 March 2026 04:25:31 +0000 (0:00:01.154) 0:03:22.299 ******** 2026-03-26 04:26:25.526226 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-26 04:26:25.526247 | orchestrator | 2026-03-26 04:26:25.526268 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-26 04:26:25.526284 | orchestrator | Thursday 26 March 2026 04:25:33 +0000 (0:00:01.486) 0:03:23.785 ******** 2026-03-26 04:26:25.526296 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526308 | orchestrator | 2026-03-26 04:26:25.526320 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-26 04:26:25.526331 | orchestrator | Thursday 26 March 2026 
04:25:35 +0000 (0:00:01.834) 0:03:25.620 ******** 2026-03-26 04:26:25.526374 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526386 | orchestrator | 2026-03-26 04:26:25.526396 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-26 04:26:25.526407 | orchestrator | Thursday 26 March 2026 04:25:37 +0000 (0:00:02.573) 0:03:28.194 ******** 2026-03-26 04:26:25.526417 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526428 | orchestrator | 2026-03-26 04:26:25.526438 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-26 04:26:25.526449 | orchestrator | Thursday 26 March 2026 04:25:39 +0000 (0:00:01.514) 0:03:29.709 ******** 2026-03-26 04:26:25.526459 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526470 | orchestrator | 2026-03-26 04:26:25.526483 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-03-26 04:26:25.526502 | orchestrator | Thursday 26 March 2026 04:25:40 +0000 (0:00:01.433) 0:03:31.143 ******** 2026-03-26 04:26:25.526520 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526537 | orchestrator | 2026-03-26 04:26:25.526555 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-26 04:26:25.526574 | orchestrator | Thursday 26 March 2026 04:25:42 +0000 (0:00:01.654) 0:03:32.798 ******** 2026-03-26 04:26:25.526592 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526612 | orchestrator | 2026-03-26 04:26:25.526629 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-26 04:26:25.526647 | orchestrator | Thursday 26 March 2026 04:25:44 +0000 (0:00:02.418) 0:03:35.217 ******** 2026-03-26 04:26:25.526658 | orchestrator | ok: [testbed-manager] 2026-03-26 04:26:25.526669 | orchestrator | 2026-03-26 04:26:25.526680 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-03-26 04:26:25.526702 | orchestrator | 2026-03-26 04:26:25.526713 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-26 04:26:25.526723 | orchestrator | Thursday 26 March 2026 04:25:46 +0000 (0:00:01.700) 0:03:36.917 ******** 2026-03-26 04:26:25.526734 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:26:25.526744 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:26:25.526755 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:26:25.526765 | orchestrator | 2026-03-26 04:26:25.526776 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-26 04:26:25.526786 | orchestrator | Thursday 26 March 2026 04:25:47 +0000 (0:00:01.410) 0:03:38.328 ******** 2026-03-26 04:26:25.526797 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:26:25.526808 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:26:25.526818 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:26:25.526829 | orchestrator | 2026-03-26 04:26:25.526860 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-26 04:26:25.526872 | orchestrator | Thursday 26 March 2026 04:25:49 +0000 (0:00:01.715) 0:03:40.044 ******** 2026-03-26 04:26:25.526882 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:26:25.526893 | orchestrator | 2026-03-26 04:26:25.526904 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-26 04:26:25.526915 | orchestrator | Thursday 26 March 2026 04:25:51 +0000 (0:00:01.847) 0:03:41.892 ******** 2026-03-26 04:26:25.526925 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-26 04:26:25.526936 | orchestrator | 2026-03-26 04:26:25.526946 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
*********************
2026-03-26 04:26:25.526956 | orchestrator | Thursday 26 March 2026 04:25:53 +0000 (0:00:01.905) 0:03:43.797 ********
2026-03-26 04:26:25.526967 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.526978 | orchestrator |
2026-03-26 04:26:25.526988 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-03-26 04:26:25.526999 | orchestrator | Thursday 26 March 2026 04:25:55 +0000 (0:00:01.865) 0:03:45.663 ********
2026-03-26 04:26:25.527009 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:26:25.527020 | orchestrator |
2026-03-26 04:26:25.527030 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-03-26 04:26:25.527041 | orchestrator | Thursday 26 March 2026 04:25:56 +0000 (0:00:01.151) 0:03:46.815 ********
2026-03-26 04:26:25.527051 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.527062 | orchestrator |
2026-03-26 04:26:25.527072 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-03-26 04:26:25.527083 | orchestrator | Thursday 26 March 2026 04:25:58 +0000 (0:00:01.972) 0:03:48.788 ********
2026-03-26 04:26:25.527093 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.527104 | orchestrator |
2026-03-26 04:26:25.527115 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-03-26 04:26:25.527126 | orchestrator | Thursday 26 March 2026 04:26:00 +0000 (0:00:02.341) 0:03:51.130 ********
2026-03-26 04:26:25.527136 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.527147 | orchestrator |
2026-03-26 04:26:25.527157 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-03-26 04:26:25.527168 | orchestrator | Thursday 26 March 2026 04:26:01 +0000 (0:00:01.175) 0:03:52.306 ********
2026-03-26 04:26:25.527178 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.527189 | orchestrator |
2026-03-26 04:26:25.527199 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-03-26 04:26:25.527210 | orchestrator | Thursday 26 March 2026 04:26:03 +0000 (0:00:01.170) 0:03:53.476 ********
2026-03-26 04:26:25.527220 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-03-26 04:26:25.527231 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-03-26 04:26:25.527243 | orchestrator | }
2026-03-26 04:26:25.527254 | orchestrator |
2026-03-26 04:26:25.527271 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-03-26 04:26:25.527282 | orchestrator | Thursday 26 March 2026 04:26:04 +0000 (0:00:01.149) 0:03:54.626 ********
2026-03-26 04:26:25.527292 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:26:25.527303 | orchestrator |
2026-03-26 04:26:25.527313 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-03-26 04:26:25.527324 | orchestrator | Thursday 26 March 2026 04:26:05 +0000 (0:00:01.155) 0:03:55.782 ********
2026-03-26 04:26:25.527373 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-03-26 04:26:25.527387 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-03-26 04:26:25.527398 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-03-26 04:26:25.527408 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-03-26 04:26:25.527419 | orchestrator |
2026-03-26 04:26:25.527429 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-03-26 04:26:25.527440 | orchestrator | Thursday 26 March 2026 04:26:10 +0000 (0:00:05.467) 0:04:01.250 ********
2026-03-26 04:26:25.527450 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.527461 | orchestrator |
2026-03-26 04:26:25.527471 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-03-26 04:26:25.527482 | orchestrator | Thursday 26 March 2026 04:26:13 +0000 (0:00:02.383) 0:04:03.634 ********
2026-03-26 04:26:25.527492 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.527503 | orchestrator |
2026-03-26 04:26:25.527514 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-03-26 04:26:25.527524 | orchestrator | Thursday 26 March 2026 04:26:15 +0000 (0:00:02.620) 0:04:06.254 ********
2026-03-26 04:26:25.527535 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-03-26 04:26:25.527545 | orchestrator |
2026-03-26 04:26:25.527566 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-03-26 04:26:25.527578 | orchestrator | Thursday 26 March 2026 04:26:19 +0000 (0:00:04.090) 0:04:10.345 ********
2026-03-26 04:26:25.527588 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:26:25.527599 | orchestrator |
2026-03-26 04:26:25.527610 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-03-26 04:26:25.527628 | orchestrator | Thursday 26 March 2026 04:26:21 +0000 (0:00:01.145) 0:04:11.491 ********
2026-03-26 04:26:25.527648 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-03-26 04:26:25.527668 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-03-26 04:26:25.527688 | orchestrator |
2026-03-26 04:26:25.527708 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-03-26 04:26:25.527727 | orchestrator | Thursday 26 March 2026 04:26:24 +0000 (0:00:02.908) 0:04:14.400 ********
2026-03-26 04:26:25.527745 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:26:25.527775 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:26:50.900400 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:26:50.900512 | orchestrator |
2026-03-26 04:26:50.900529 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-03-26 04:26:50.900542 | orchestrator | Thursday 26 March 2026 04:26:25 +0000 (0:00:01.461) 0:04:15.861 ********
2026-03-26 04:26:50.900553 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:26:50.900565 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:26:50.900576 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:26:50.900587 | orchestrator |
2026-03-26 04:26:50.900597 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-03-26 04:26:50.900608 | orchestrator |
2026-03-26 04:26:50.900619 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-03-26 04:26:50.900629 | orchestrator | Thursday 26 March 2026 04:26:27 +0000 (0:00:02.085) 0:04:17.946 ********
2026-03-26 04:26:50.900640 | orchestrator | ok: [testbed-manager]
2026-03-26 04:26:50.900676 | orchestrator |
2026-03-26 04:26:50.900688 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-03-26 04:26:50.900699 | orchestrator | Thursday 26 March 2026 04:26:28 +0000 (0:00:01.326) 0:04:19.273 ********
2026-03-26 04:26:50.900710 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-03-26 04:26:50.900721 | orchestrator |
2026-03-26 04:26:50.900732 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-03-26 04:26:50.900743 | orchestrator | Thursday 26 March 2026 04:26:30 +0000 (0:00:01.561) 0:04:20.835 ********
2026-03-26 04:26:50.900753 | orchestrator | ok: [testbed-manager]
2026-03-26 04:26:50.900764 | orchestrator |
2026-03-26 04:26:50.900775 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-03-26 04:26:50.900785 | orchestrator |
2026-03-26 04:26:50.900796 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-03-26 04:26:50.900822 | orchestrator | Thursday 26 March 2026 04:26:35 +0000 (0:00:05.003) 0:04:25.839 ********
2026-03-26 04:26:50.900833 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:26:50.900844 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:26:50.900857 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:26:50.900875 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:26:50.900892 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:26:50.900909 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:26:50.900927 | orchestrator |
2026-03-26 04:26:50.900946 | orchestrator | TASK [Manage labels] ***********************************************************
2026-03-26 04:26:50.900965 | orchestrator | Thursday 26 March 2026 04:26:37 +0000 (0:00:01.875) 0:04:27.715 ********
2026-03-26 04:26:50.900983 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-26 04:26:50.900999 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-26 04:26:50.901009 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-03-26 04:26:50.901020 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-26 04:26:50.901031 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-26 04:26:50.901041 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-03-26 04:26:50.901052 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-26 04:26:50.901062 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-26 04:26:50.901073 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-03-26 04:26:50.901084 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-26 04:26:50.901095 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-26 04:26:50.901105 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-03-26 04:26:50.901116 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-26 04:26:50.901126 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-26 04:26:50.901137 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-26 04:26:50.901148 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-03-26 04:26:50.901158 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-26 04:26:50.901168 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-03-26 04:26:50.901179 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-26 04:26:50.901189 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-26 04:26:50.901209 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-03-26 04:26:50.901220 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-26 04:26:50.901230 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-26 04:26:50.901241 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-03-26 04:26:50.901275 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-26 04:26:50.901286 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-26 04:26:50.901315 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-03-26 04:26:50.901327 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-26 04:26:50.901337 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-26 04:26:50.901348 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-03-26 04:26:50.901358 | orchestrator |
2026-03-26 04:26:50.901369 | orchestrator | TASK [Manage annotations] ******************************************************
2026-03-26 04:26:50.901380 | orchestrator | Thursday 26 March 2026 04:26:46 +0000 (0:00:09.159) 0:04:36.874 ********
2026-03-26 04:26:50.901390 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:26:50.901401 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:26:50.901412 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:26:50.901422 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:26:50.901433 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:26:50.901444 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:26:50.901454 | orchestrator |
2026-03-26 04:26:50.901466 | orchestrator | TASK [Manage taints] ***********************************************************
2026-03-26 04:26:50.901477 | orchestrator | Thursday 26 March 2026 04:26:48 +0000 (0:00:01.861) 0:04:38.736 ********
2026-03-26 04:26:50.901488 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:26:50.901498 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:26:50.901509 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:26:50.901519 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:26:50.901530 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:26:50.901540 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:26:50.901551 | orchestrator |
2026-03-26 04:26:50.901562 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:26:50.901573 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 04:26:50.901586 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-26 04:26:50.901597 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-26 04:26:50.901608 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-26 04:26:50.901619 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-26 04:26:50.901629 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-26 04:26:50.901640 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-26 04:26:50.901651 | orchestrator |
2026-03-26 04:26:50.901661 | orchestrator |
2026-03-26 04:26:50.901672 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:26:50.901690 | orchestrator | Thursday 26 March 2026 04:26:50 +0000 (0:00:02.479) 0:04:41.215 ********
2026-03-26 04:26:50.901701 | orchestrator | ===============================================================================
2026-03-26 04:26:50.901712 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.06s
2026-03-26 04:26:50.901722 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.37s
2026-03-26 04:26:50.901734 | orchestrator | Manage labels ----------------------------------------------------------- 9.16s
2026-03-26 04:26:50.901744 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.93s
2026-03-26 04:26:50.901755 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.47s
2026-03-26 04:26:50.901766 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 5.41s
2026-03-26 04:26:50.901776 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.00s
2026-03-26 04:26:50.901787 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.29s
2026-03-26 04:26:50.901798 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.09s
2026-03-26 04:26:50.901808 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.15s
2026-03-26 04:26:50.901819 | orchestrator | k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries --- 3.01s
2026-03-26 04:26:50.901830 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.94s
2026-03-26 04:26:50.901840 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.91s
2026-03-26 04:26:50.901851 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.86s
2026-03-26 04:26:50.901861 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.69s
2026-03-26 04:26:50.901872 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.62s
2026-03-26 04:26:50.901883 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.57s
2026-03-26 04:26:50.901893 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.53s
2026-03-26 04:26:50.901911 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.52s
2026-03-26 04:26:51.396458 | orchestrator | Manage taints ----------------------------------------------------------- 2.48s
2026-03-26 04:26:51.727484 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-26 04:26:51.727584 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-03-26 04:26:51.736397 | orchestrator | + set -e
2026-03-26 04:26:51.736470 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-26 04:26:51.736483 | orchestrator | ++ export INTERACTIVE=false
2026-03-26 04:26:51.736496 | orchestrator | ++ INTERACTIVE=false
2026-03-26 04:26:51.736978 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-26 04:26:51.737009 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-26 04:26:51.737028 | orchestrator | + osism apply openstackclient
2026-03-26 04:27:03.879561 | orchestrator | 2026-03-26 04:27:03 | INFO  | Task e1a66714-e3a4-43c7-84c4-5634d0d755e8 (openstackclient) was prepared for execution.
2026-03-26 04:27:03.879693 | orchestrator | 2026-03-26 04:27:03 | INFO  | It takes a moment until task e1a66714-e3a4-43c7-84c4-5634d0d755e8 (openstackclient) has been started and output is visible here.
2026-03-26 04:27:30.458314 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-26 04:27:30.458428 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-26 04:27:30.458457 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-26 04:27:30.458467 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-26 04:27:30.458517 | orchestrator |
2026-03-26 04:27:30.458543 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-26 04:27:30.458554 | orchestrator |
2026-03-26 04:27:30.458565 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-26 04:27:30.458575 | orchestrator | Thursday 26 March 2026 04:27:10 +0000 (0:00:01.545) 0:00:01.545 ********
2026-03-26 04:27:30.458587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-26 04:27:30.458599 | orchestrator |
2026-03-26 04:27:30.458610 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-26 04:27:30.458620 | orchestrator | Thursday 26 March 2026 04:27:10 +0000 (0:00:00.865) 0:00:02.410 ********
2026-03-26 04:27:30.458631 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-26 04:27:30.458641 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-26 04:27:30.458651 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-26 04:27:30.458662 | orchestrator |
2026-03-26 04:27:30.458672 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-26 04:27:30.458683 | orchestrator | Thursday 26 March 2026 04:27:12 +0000 (0:00:01.489) 0:00:03.900 ********
2026-03-26 04:27:30.458693 | orchestrator | changed: [testbed-manager]
2026-03-26 04:27:30.458704 | orchestrator |
2026-03-26 04:27:30.458714 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-26 04:27:30.458725 | orchestrator | Thursday 26 March 2026 04:27:13 +0000 (0:00:01.268) 0:00:05.168 ********
2026-03-26 04:27:30.458735 | orchestrator | ok: [testbed-manager]
2026-03-26 04:27:30.458747 | orchestrator |
2026-03-26 04:27:30.458757 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-26 04:27:30.458768 | orchestrator | Thursday 26 March 2026 04:27:14 +0000 (0:00:01.125) 0:00:06.293 ********
2026-03-26 04:27:30.458778 | orchestrator | ok: [testbed-manager]
2026-03-26 04:27:30.458788 | orchestrator |
2026-03-26 04:27:30.458800 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-26 04:27:30.458811 | orchestrator | Thursday 26 March 2026 04:27:15 +0000 (0:00:00.931) 0:00:07.225 ********
2026-03-26 04:27:30.458822 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-03-26 04:27:30.458832 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-03-26 04:27:30.458855 | orchestrator | ok: [testbed-manager]
2026-03-26 04:27:30.458867 | orchestrator |
2026-03-26 04:27:30.458879 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-26 04:27:30.458891 | orchestrator | Thursday 26 March 2026 04:27:16 +0000 (0:00:00.713) 0:00:07.938 ********
2026-03-26 04:27:30.458903 | orchestrator | changed: [testbed-manager]
2026-03-26 04:27:30.458915 | orchestrator |
2026-03-26 04:27:30.458926 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-26 04:27:30.458937 | orchestrator | Thursday 26 March 2026 04:27:26 +0000 (0:00:10.507) 0:00:18.446 ********
2026-03-26 04:27:30.458947 | orchestrator | changed: [testbed-manager]
2026-03-26 04:27:30.458958 | orchestrator |
2026-03-26 04:27:30.458968 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-26 04:27:30.458978 | orchestrator | Thursday 26 March 2026 04:27:28 +0000 (0:00:01.309) 0:00:19.755 ********
2026-03-26 04:27:30.458989 | orchestrator | changed: [testbed-manager]
2026-03-26 04:27:30.458999 | orchestrator |
2026-03-26 04:27:30.459009 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-26 04:27:30.459020 | orchestrator | Thursday 26 March 2026 04:27:28 +0000 (0:00:00.616) 0:00:20.371 ********
2026-03-26 04:27:30.459030 | orchestrator | ok: [testbed-manager]
2026-03-26 04:27:30.459049 | orchestrator |
2026-03-26 04:27:30.459059 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:27:30.459070 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-26 04:27:30.459081 | orchestrator |
2026-03-26 04:27:30.459092 | orchestrator |
2026-03-26 04:27:30.459103 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:27:30.459113 | orchestrator | Thursday 26 March 2026 04:27:30 +0000 (0:00:01.156) 0:00:21.528 ********
2026-03-26 04:27:30.459147 | orchestrator | ===============================================================================
2026-03-26 04:27:30.459159 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.51s
2026-03-26 04:27:30.459170 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.49s
2026-03-26 04:27:30.459180 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.31s
2026-03-26 04:27:30.459191 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.27s
2026-03-26 04:27:30.459201 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.16s
2026-03-26 04:27:30.459212 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.13s
2026-03-26 04:27:30.459243 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.93s
2026-03-26 04:27:30.459255 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.87s
2026-03-26 04:27:30.459266 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.71s
2026-03-26 04:27:30.459277 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s
2026-03-26 04:27:30.795268 | orchestrator | + osism apply -a upgrade common
2026-03-26 04:27:32.859523 | orchestrator | 2026-03-26 04:27:32 | INFO  | Task d9da0468-b2fe-479e-8776-94e7f6347ff6 (common) was prepared for execution.
2026-03-26 04:27:32.859666 | orchestrator | 2026-03-26 04:27:32 | INFO  | It takes a moment until task d9da0468-b2fe-479e-8776-94e7f6347ff6 (common) has been started and output is visible here.
2026-03-26 04:27:51.980483 | orchestrator |
2026-03-26 04:27:51.980597 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-26 04:27:51.980613 | orchestrator |
2026-03-26 04:27:51.980625 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-26 04:27:51.980636 | orchestrator | Thursday 26 March 2026 04:27:39 +0000 (0:00:02.502) 0:00:02.502 ********
2026-03-26 04:27:51.980648 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 04:27:51.980660 | orchestrator |
2026-03-26 04:27:51.980695 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-26 04:27:51.980706 | orchestrator | Thursday 26 March 2026 04:27:43 +0000 (0:00:03.336) 0:00:05.838 ********
2026-03-26 04:27:51.980718 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:27:51.980729 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:27:51.980740 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:27:51.980751 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:27:51.980762 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:27:51.980773 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:27:51.980784 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:27:51.980794 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:27:51.980805 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:27:51.980815 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:27:51.980852 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:27:51.980863 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:27:51.980873 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:27:51.980884 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:27:51.980894 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:27:51.980905 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:27:51.980915 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:27:51.980926 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:27:51.980936 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:27:51.980946 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:27:51.980957 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:27:51.980967 | orchestrator |
2026-03-26 04:27:51.980978 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-26 04:27:51.980988 | orchestrator | Thursday 26 March 2026 04:27:46 +0000 (0:00:03.585) 0:00:09.424 ********
2026-03-26 04:27:51.980999 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 04:27:51.981011 | orchestrator |
2026-03-26 04:27:51.981021 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-26 04:27:51.981031 | orchestrator | Thursday 26 March 2026 04:27:49 +0000 (0:00:02.808) 0:00:12.232 ********
2026-03-26 04:27:51.981047 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:27:51.981103 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:27:51.981149 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:27:51.981162 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:27:51.981183 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:27:51.981194 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:27:51.981379 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:27:51.981398 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:27:51.981410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:27:51.981438 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:27:54.478537 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:27:54.478745 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:27:54.478778 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes':
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.478800 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.478849 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.478875 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.478896 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.478945 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.478981 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.479026 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.479138 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:27:54.479164 | orchestrator | 2026-03-26 04:27:54.479182 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-26 04:27:54.479195 | orchestrator | Thursday 26 March 2026 04:27:53 +0000 (0:00:04.313) 0:00:16.546 ******** 2026-03-26 04:27:54.479212 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:54.479227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:54.479241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:54.479254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:54.479295 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.626808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.626907 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.626934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.626981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:56.627012 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:27:56.627025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627036 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:27:56.627139 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:27:56.627170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627181 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:27:56.627217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:56.627229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:56.627242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627285 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627305 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:27:56.627334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:56.627362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627380 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:27:56.627403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:56.627435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908245 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:27:59.908316 | orchestrator | 2026-03-26 04:27:59.908327 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-26 04:27:59.908335 | orchestrator | Thursday 26 March 2026 04:27:56 +0000 (0:00:02.875) 0:00:19.421 ******** 2026-03-26 04:27:59.908345 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:59.908355 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:59.908382 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908404 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:27:59.908413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:59.908428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-26 04:27:59.908457 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:27:59.908465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:59.908472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:27:59.908480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908513 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:27:59.908521 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:27:59.908529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:27:59.908541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:12.646723 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:28:12.646869 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:12.646915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:12.646943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:12.647072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:12.647100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:12.647121 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:28:12.647136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:12.647146 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:28:12.647156 | orchestrator | 2026-03-26 04:28:12.647168 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-26 04:28:12.647180 | 
orchestrator | Thursday 26 March 2026 04:27:59 +0000 (0:00:03.287) 0:00:22.708 ******** 2026-03-26 04:28:12.647191 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:28:12.647202 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:28:12.647213 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:28:12.647224 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:28:12.647235 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:28:12.647247 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:28:12.647260 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:28:12.647271 | orchestrator | 2026-03-26 04:28:12.647284 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-26 04:28:12.647296 | orchestrator | Thursday 26 March 2026 04:28:02 +0000 (0:00:02.349) 0:00:25.058 ******** 2026-03-26 04:28:12.647308 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:28:12.647321 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:28:12.647332 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:28:12.647345 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:28:12.647378 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:28:12.647390 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:28:12.647402 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:28:12.647414 | orchestrator | 2026-03-26 04:28:12.647426 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-26 04:28:12.647439 | orchestrator | Thursday 26 March 2026 04:28:04 +0000 (0:00:02.157) 0:00:27.215 ******** 2026-03-26 04:28:12.647451 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:28:12.647461 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:28:12.647472 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:28:12.647482 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:28:12.647493 | orchestrator | skipping: 
[testbed-node-3] 2026-03-26 04:28:12.647503 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:28:12.647526 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:28:12.647537 | orchestrator | 2026-03-26 04:28:12.647548 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-26 04:28:12.647558 | orchestrator | Thursday 26 March 2026 04:28:06 +0000 (0:00:02.257) 0:00:29.473 ******** 2026-03-26 04:28:12.647569 | orchestrator | changed: [testbed-manager] 2026-03-26 04:28:12.647580 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:28:12.647590 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:28:12.647600 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:28:12.647616 | orchestrator | changed: [testbed-node-3] 2026-03-26 04:28:12.647635 | orchestrator | changed: [testbed-node-4] 2026-03-26 04:28:12.647651 | orchestrator | changed: [testbed-node-5] 2026-03-26 04:28:12.647669 | orchestrator | 2026-03-26 04:28:12.647687 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-26 04:28:12.647705 | orchestrator | Thursday 26 March 2026 04:28:09 +0000 (0:00:02.933) 0:00:32.406 ******** 2026-03-26 04:28:12.647724 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:12.647752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:12.647775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:12.647795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:12.647814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:12.647845 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.492925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:15.493082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:15.493129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493167 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493230 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493298 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493309 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:15.493339 | orchestrator | 2026-03-26 04:28:15.493352 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-26 04:28:15.493364 | orchestrator | Thursday 26 March 2026 04:28:14 +0000 (0:00:04.925) 0:00:37.331 ******** 2026-03-26 04:28:15.493375 | orchestrator | [WARNING]: Skipped 2026-03-26 04:28:15.493387 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-26 04:28:15.493405 | orchestrator | to this access issue: 2026-03-26 04:28:34.443772 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-26 04:28:34.443925 | orchestrator | directory 2026-03-26 04:28:34.444048 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:28:34.444072 | 
orchestrator | 2026-03-26 04:28:34.444090 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-26 04:28:34.444109 | orchestrator | Thursday 26 March 2026 04:28:16 +0000 (0:00:02.395) 0:00:39.727 ******** 2026-03-26 04:28:34.444127 | orchestrator | [WARNING]: Skipped 2026-03-26 04:28:34.444145 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-26 04:28:34.444162 | orchestrator | to this access issue: 2026-03-26 04:28:34.444179 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-26 04:28:34.444197 | orchestrator | directory 2026-03-26 04:28:34.444216 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:28:34.444234 | orchestrator | 2026-03-26 04:28:34.444252 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-26 04:28:34.444272 | orchestrator | Thursday 26 March 2026 04:28:18 +0000 (0:00:01.857) 0:00:41.585 ******** 2026-03-26 04:28:34.444291 | orchestrator | [WARNING]: Skipped 2026-03-26 04:28:34.444311 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-26 04:28:34.444329 | orchestrator | to this access issue: 2026-03-26 04:28:34.444348 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-26 04:28:34.444369 | orchestrator | directory 2026-03-26 04:28:34.444390 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:28:34.444410 | orchestrator | 2026-03-26 04:28:34.444428 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-26 04:28:34.444449 | orchestrator | Thursday 26 March 2026 04:28:20 +0000 (0:00:01.853) 0:00:43.438 ******** 2026-03-26 04:28:34.444468 | orchestrator | [WARNING]: Skipped 2026-03-26 04:28:34.444487 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-26 04:28:34.444499 | orchestrator | to this access issue: 2026-03-26 04:28:34.444510 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-26 04:28:34.444521 | orchestrator | directory 2026-03-26 04:28:34.444551 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:28:34.444563 | orchestrator | 2026-03-26 04:28:34.444573 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-26 04:28:34.444584 | orchestrator | Thursday 26 March 2026 04:28:22 +0000 (0:00:01.835) 0:00:45.273 ******** 2026-03-26 04:28:34.444595 | orchestrator | changed: [testbed-manager] 2026-03-26 04:28:34.444606 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:28:34.444617 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:28:34.444628 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:28:34.444639 | orchestrator | changed: [testbed-node-3] 2026-03-26 04:28:34.444650 | orchestrator | changed: [testbed-node-4] 2026-03-26 04:28:34.444687 | orchestrator | changed: [testbed-node-5] 2026-03-26 04:28:34.444699 | orchestrator | 2026-03-26 04:28:34.444710 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-26 04:28:34.444721 | orchestrator | Thursday 26 March 2026 04:28:26 +0000 (0:00:04.267) 0:00:49.541 ******** 2026-03-26 04:28:34.444731 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:28:34.444743 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:28:34.444754 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:28:34.444765 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:28:34.444775 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:28:34.444786 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:28:34.444797 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:28:34.444808 | orchestrator | 2026-03-26 04:28:34.444818 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-26 04:28:34.444829 | orchestrator | Thursday 26 March 2026 04:28:29 +0000 (0:00:03.102) 0:00:52.643 ******** 2026-03-26 04:28:34.444840 | orchestrator | ok: [testbed-manager] 2026-03-26 04:28:34.444851 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:28:34.444862 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:28:34.444872 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:28:34.444883 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:28:34.444893 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:28:34.444904 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:28:34.444913 | orchestrator | 2026-03-26 04:28:34.444923 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-26 04:28:34.444958 | orchestrator | Thursday 26 March 2026 04:28:32 +0000 (0:00:02.866) 0:00:55.509 ******** 2026-03-26 04:28:34.444971 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:34.445009 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:34.445021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:34.445036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 
04:28:34.445056 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:34.445069 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:34.445079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:34.445089 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:34.445107 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:44.399632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:44.399767 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:44.399835 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:44.399858 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:44.399878 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:44.399894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:44.399942 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:44.399984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:44.400004 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:44.400034 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:44.400053 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:44.400070 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:44.400088 | orchestrator | 2026-03-26 04:28:44.400107 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-26 04:28:44.400121 | orchestrator | Thursday 26 March 2026 04:28:35 +0000 (0:00:02.813) 0:00:58.323 ******** 2026-03-26 04:28:44.400131 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:28:44.400143 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 
04:28:44.400154 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:28:44.400165 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:28:44.400176 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:28:44.400188 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:28:44.400198 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:28:44.400209 | orchestrator | 2026-03-26 04:28:44.400229 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-26 04:28:44.400241 | orchestrator | Thursday 26 March 2026 04:28:38 +0000 (0:00:03.118) 0:01:01.441 ******** 2026-03-26 04:28:44.400252 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:28:44.400263 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:28:44.400273 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:28:44.400284 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:28:44.400295 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:28:44.400306 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:28:44.400317 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:28:44.400328 | orchestrator | 2026-03-26 04:28:44.400339 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-26 04:28:44.400349 | orchestrator | Thursday 26 March 2026 
04:28:41 +0000 (0:00:03.309) 0:01:04.751 ******** 2026-03-26 04:28:44.400377 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:46.427636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:46.427859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:46.427881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:46.427891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:46.427925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:46.427936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:28:46.427947 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428012 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428059 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:46.428095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:49.290283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:49.290404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:49.290422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:49.290434 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:28:49.290445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-26 04:28:49.290457 | orchestrator | 2026-03-26 04:28:49.290470 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-03-26 04:28:49.290482 | orchestrator | Thursday 26 March 2026 04:28:46 +0000 (0:00:04.481) 0:01:09.233 ******** 2026-03-26 04:28:49.290494 | orchestrator | changed: [testbed-manager] => { 2026-03-26 04:28:49.290505 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:28:49.290516 | orchestrator | } 2026-03-26 04:28:49.290527 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 04:28:49.290561 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:28:49.290572 | orchestrator | } 2026-03-26 04:28:49.290583 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 04:28:49.290593 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:28:49.290604 | orchestrator | } 2026-03-26 04:28:49.290615 | orchestrator | changed: [testbed-node-2] => { 2026-03-26 04:28:49.290625 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:28:49.290636 | orchestrator | } 2026-03-26 04:28:49.290646 | orchestrator | changed: [testbed-node-3] => { 2026-03-26 04:28:49.290657 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:28:49.290668 | orchestrator | } 2026-03-26 04:28:49.290678 | orchestrator | changed: [testbed-node-4] => { 2026-03-26 04:28:49.290689 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:28:49.290699 | orchestrator | } 2026-03-26 04:28:49.290710 | orchestrator | changed: [testbed-node-5] => { 2026-03-26 04:28:49.290720 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:28:49.290731 | orchestrator | } 2026-03-26 04:28:49.290741 | orchestrator | 2026-03-26 04:28:49.290752 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-26 04:28:49.290763 | orchestrator | Thursday 26 March 2026 04:28:48 +0000 (0:00:02.103) 0:01:11.337 ******** 2026-03-26 04:28:49.290776 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:49.290810 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:49.290825 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:49.290838 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:28:49.290857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:49.290870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:49.290917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:49.290940 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:28:49.290962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:49.290982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:49.291002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:49.291016 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:28:49.291037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:58.149405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149567 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:28:58.149593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:58.149606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149629 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:28:58.149641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:58.149652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149694 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:28:58.149710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:28:58.149731 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:28:58.149754 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:28:58.149765 | orchestrator | 2026-03-26 04:28:58.149777 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-26 04:28:58.149790 | orchestrator | Thursday 26 March 2026 04:28:51 +0000 (0:00:03.054) 0:01:14.392 ******** 2026-03-26 04:28:58.149801 | orchestrator | 2026-03-26 04:28:58.149812 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-26 04:28:58.149824 | orchestrator | Thursday 26 March 2026 04:28:52 +0000 (0:00:00.446) 0:01:14.838 ******** 2026-03-26 04:28:58.149834 | orchestrator | 2026-03-26 04:28:58.149845 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-26 04:28:58.149856 | orchestrator | Thursday 26 March 2026 04:28:52 +0000 (0:00:00.441) 0:01:15.280 ******** 2026-03-26 04:28:58.149866 | orchestrator | 2026-03-26 04:28:58.149916 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-26 04:28:58.149929 | orchestrator | Thursday 26 March 2026 04:28:52 +0000 (0:00:00.442) 0:01:15.722 ******** 2026-03-26 04:28:58.149941 | orchestrator | 2026-03-26 04:28:58.149953 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-26 04:28:58.149965 | orchestrator | Thursday 26 March 2026 04:28:53 +0000 (0:00:00.461) 0:01:16.183 ******** 2026-03-26 04:28:58.149977 | orchestrator | 2026-03-26 04:28:58.149989 | orchestrator | TASK [common : Flush handlers] 
*************************************************
2026-03-26 04:28:58.150000 | orchestrator | Thursday 26 March 2026 04:28:54 +0000 (0:00:00.715) 0:01:16.899 ********
2026-03-26 04:28:58.150012 | orchestrator |
2026-03-26 04:28:58.150089 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:28:58.150102 | orchestrator | Thursday 26 March 2026 04:28:54 +0000 (0:00:00.443) 0:01:17.342 ********
2026-03-26 04:28:58.150114 | orchestrator |
2026-03-26 04:28:58.150127 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-26 04:28:58.150140 | orchestrator | Thursday 26 March 2026 04:28:55 +0000 (0:00:00.829) 0:01:18.172 ********
2026-03-26 04:28:58.150178 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_q4x7yxbs/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_q4x7yxbs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_q4x7yxbs/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-26 04:29:01.586631 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_dnvx1h1y/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_dnvx1h1y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_dnvx1h1y/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-26 04:29:01.586779 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_zh6b6vk4/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_zh6b6vk4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_zh6b6vk4/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-26 04:29:01.586822 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_bqyc2qzf/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_bqyc2qzf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_bqyc2qzf/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-26 04:29:01.586902 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_c5169jl2/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_c5169jl2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_c5169jl2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-26 04:29:02.107740 | orchestrator | 2026-03-26 04:29:02 | INFO  | Task 62611976-a838-4c25-9ed0-48c8dae1f80c (common) was prepared for execution.
2026-03-26 04:29:02.107841 | orchestrator | 2026-03-26 04:29:02 | INFO  | It takes a moment until task 62611976-a838-4c25-9ed0-48c8dae1f80c (common) has been started and output is visible here.
2026-03-26 04:29:07.958430 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_tymil31q/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_tymil31q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_tymil31q/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-26 04:29:07.958549 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_jm1xy0ab/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_jm1xy0ab/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_jm1xy0ab/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-03-26 04:29:07.958557 | orchestrator |
2026-03-26 04:29:07.958563 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:29:07.958580 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-26 04:29:07.958585 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-26 04:29:07.958589 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-26 04:29:07.958602 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-26 04:29:07.958606 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-26 04:29:07.958609 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-26 04:29:07.958616 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-03-26 04:29:07.958620 | orchestrator |
2026-03-26 04:29:07.958623 | orchestrator |
2026-03-26 04:29:07.958627 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:29:07.958631 | orchestrator | Thursday 26 March 2026 04:29:01 +0000 (0:00:06.225) 0:01:24.397 ********
2026-03-26 04:29:07.958635 | orchestrator | ===============================================================================
2026-03-26 04:29:07.958638 | orchestrator | common : Restart fluentd container -------------------------------------- 6.23s
2026-03-26 04:29:07.958642 | orchestrator | common : Copying over config.json files for services -------------------- 4.93s
2026-03-26 04:29:07.958646 | orchestrator | service-check-containers : common | Check containers -------------------- 4.48s
2026-03-26 04:29:07.958650 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.31s
2026-03-26 04:29:07.958653 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.27s
2026-03-26 04:29:07.958657 | orchestrator | common : Flush handlers ------------------------------------------------- 3.78s
2026-03-26 04:29:07.958661 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.59s
2026-03-26 04:29:07.958664 | orchestrator | common : include_tasks -------------------------------------------------- 3.34s
2026-03-26 04:29:07.958668 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.31s
2026-03-26 04:29:07.958672 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.29s
2026-03-26 04:29:07.958676 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.12s
2026-03-26 04:29:07.958679 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.10s
2026-03-26 04:29:07.958683 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.05s
2026-03-26 04:29:07.958687 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.93s
2026-03-26 04:29:07.958691 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.88s
2026-03-26 04:29:07.958696 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.87s
2026-03-26 04:29:07.958700 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.81s
2026-03-26 04:29:07.958703 | orchestrator | common : include_tasks -------------------------------------------------- 2.81s
2026-03-26 04:29:07.958707 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.40s
2026-03-26 04:29:07.958726 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.35s
2026-03-26 04:29:07.958730 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-26 04:29:07.958735 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-26 04:29:07.958743 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-26 04:29:07.958747 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-26 04:29:07.958758 | orchestrator |
2026-03-26 04:29:07.958766 | orchestrator | PLAY [Apply role common] *******************************************************
2026-03-26 04:29:17.400247 | orchestrator |
2026-03-26 04:29:17.400361 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-26 04:29:17.400376 | orchestrator | Thursday 26 March 2026 04:29:07 +0000 (0:00:01.579) 0:00:01.581 ********
2026-03-26 04:29:17.400389 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 04:29:17.400402 | orchestrator |
2026-03-26 04:29:17.400413 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-03-26 04:29:17.400424 | orchestrator | Thursday 26 March 2026 04:29:10 +0000 (0:00:02.455) 0:00:04.036 ********
2026-03-26 04:29:17.400435 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:29:17.400448 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:29:17.400468 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:29:17.400489 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:29:17.400509 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:29:17.400526 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:29:17.400545 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:29:17.400566 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:29:17.400587 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:29:17.400607 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-03-26 04:29:17.400628 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:29:17.400640 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:29:17.400651 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:29:17.400680 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:29:17.400692 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:29:17.400702 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:29:17.400713 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-03-26 04:29:17.400723 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:29:17.400734 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:29:17.400744 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:29:17.400755 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-03-26 04:29:17.400765 | orchestrator |
2026-03-26 04:29:17.400776 | orchestrator | TASK [common : include_tasks] **************************************************
2026-03-26 04:29:17.400790 | orchestrator | Thursday 26 March 2026 04:29:12 +0000 (0:00:02.257) 0:00:06.294 ********
2026-03-26 04:29:17.400802 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 04:29:17.400816 | orchestrator |
2026-03-26 04:29:17.400854 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-03-26 04:29:17.400866 | orchestrator | Thursday 26 March 2026 04:29:14 +0000 (0:00:02.087) 0:00:08.381 ********
2026-03-26 04:29:17.400882 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:17.400919 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:17.400951 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:17.400966 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:17.400979 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:17.400996 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:17.401010 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:17.401022 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:17.401044 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:17.401066 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.818639 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.818747 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.818781 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.818794 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.818910 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.818940 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.818962 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.819008 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.819027 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.819039 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.819057 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:18.819069 | orchestrator |
2026-03-26 04:29:18.819082 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-03-26 04:29:18.819093 | orchestrator | Thursday 26 March 2026 04:29:18 +0000 (0:00:03.314) 0:00:11.696 ********
2026-03-26 04:29:18.819106 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:18.819131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:18.819143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:18.819154 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:18.819174 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:19.623270 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623286 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:29:19.623299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:19.623340 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:29:19.623350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-03-26 04:29:19.623370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:19.623379 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:29:19.623404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623415 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:29:19.623466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:19.623500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623510 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:29:19.623519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:19.623539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:19.623549 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:29:19.623564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807262 | 
orchestrator | skipping: [testbed-node-5] 2026-03-26 04:29:20.807288 | orchestrator | 2026-03-26 04:29:20.807310 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-26 04:29:20.807363 | orchestrator | Thursday 26 March 2026 04:29:19 +0000 (0:00:01.538) 0:00:13.234 ******** 2026-03-26 04:29:20.807417 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:20.807441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:20.807462 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:20.807526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807629 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:29:20.807650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:20.807673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:20.807718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:20.807775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:29.341364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:29.341483 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:29:29.341500 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:29:29.341511 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:29:29.341522 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:29:29.341552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:29.341568 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:29.341580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:29.341591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:29.341603 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:29:29.341614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:29.341626 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:29.341657 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:29:29.341668 | orchestrator | 2026-03-26 04:29:29.341680 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-03-26 04:29:29.341693 | orchestrator | Thursday 26 March 2026 04:29:21 +0000 (0:00:02.256) 0:00:15.490 ******** 2026-03-26 04:29:29.341720 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:29:29.341731 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:29:29.341742 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:29:29.341752 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:29:29.341763 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:29:29.341773 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:29:29.341784 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:29:29.341856 | orchestrator | 2026-03-26 04:29:29.341868 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-26 04:29:29.341879 | 
orchestrator | Thursday 26 March 2026 04:29:22 +0000 (0:00:01.071) 0:00:16.562 ******** 2026-03-26 04:29:29.341889 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:29:29.341902 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:29:29.341914 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:29:29.341927 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:29:29.341939 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:29:29.341951 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:29:29.341964 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:29:29.341976 | orchestrator | 2026-03-26 04:29:29.341994 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-26 04:29:29.342007 | orchestrator | Thursday 26 March 2026 04:29:23 +0000 (0:00:00.929) 0:00:17.492 ******** 2026-03-26 04:29:29.342080 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:29:29.342095 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:29:29.342107 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:29:29.342119 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:29:29.342131 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:29:29.342143 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:29:29.342155 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:29:29.342167 | orchestrator | 2026-03-26 04:29:29.342180 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-03-26 04:29:29.342193 | orchestrator | Thursday 26 March 2026 04:29:24 +0000 (0:00:00.789) 0:00:18.281 ******** 2026-03-26 04:29:29.342206 | orchestrator | ok: [testbed-manager] 2026-03-26 04:29:29.342221 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:29:29.342234 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:29:29.342246 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:29:29.342259 | orchestrator | ok: [testbed-node-3] 2026-03-26 
04:29:29.342270 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:29:29.342281 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:29:29.342292 | orchestrator | 2026-03-26 04:29:29.342303 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-26 04:29:29.342314 | orchestrator | Thursday 26 March 2026 04:29:26 +0000 (0:00:01.923) 0:00:20.205 ******** 2026-03-26 04:29:29.342326 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:29.342339 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:29.342361 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:29.342373 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:29.342397 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:30.262924 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263025 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:30.263040 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263052 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:30.263086 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263098 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263130 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263144 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263157 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263168 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263188 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263200 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263211 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263222 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:30.263241 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-03-26 04:29:30.263261 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:43.881555 | orchestrator | 2026-03-26 04:29:43.881695 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-26 04:29:43.881750 | orchestrator | Thursday 26 March 2026 04:29:30 +0000 (0:00:03.670) 0:00:23.875 ******** 2026-03-26 04:29:43.881847 | orchestrator | [WARNING]: Skipped 2026-03-26 04:29:43.881867 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-26 04:29:43.881884 | orchestrator | to this access issue: 2026-03-26 04:29:43.881902 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-26 04:29:43.881918 | orchestrator | directory 2026-03-26 04:29:43.881934 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:29:43.881951 | orchestrator | 2026-03-26 04:29:43.881968 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-26 04:29:43.881983 | orchestrator | Thursday 26 March 2026 04:29:31 +0000 (0:00:01.454) 0:00:25.330 ******** 2026-03-26 04:29:43.881999 | orchestrator | [WARNING]: Skipped 2026-03-26 04:29:43.882015 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-26 04:29:43.882130 | orchestrator | to this access issue: 2026-03-26 04:29:43.882145 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-26 04:29:43.882158 | orchestrator | directory 2026-03-26 
04:29:43.882170 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:29:43.882184 | orchestrator | 2026-03-26 04:29:43.882197 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-26 04:29:43.882211 | orchestrator | Thursday 26 March 2026 04:29:32 +0000 (0:00:00.897) 0:00:26.228 ******** 2026-03-26 04:29:43.882224 | orchestrator | [WARNING]: Skipped 2026-03-26 04:29:43.882237 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-26 04:29:43.882251 | orchestrator | to this access issue: 2026-03-26 04:29:43.882267 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-26 04:29:43.882281 | orchestrator | directory 2026-03-26 04:29:43.882295 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:29:43.882309 | orchestrator | 2026-03-26 04:29:43.882322 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-26 04:29:43.882336 | orchestrator | Thursday 26 March 2026 04:29:33 +0000 (0:00:00.949) 0:00:27.177 ******** 2026-03-26 04:29:43.882350 | orchestrator | [WARNING]: Skipped 2026-03-26 04:29:43.882364 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-26 04:29:43.882377 | orchestrator | to this access issue: 2026-03-26 04:29:43.882390 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-26 04:29:43.882403 | orchestrator | directory 2026-03-26 04:29:43.882417 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-26 04:29:43.882430 | orchestrator | 2026-03-26 04:29:43.882443 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-26 04:29:43.882456 | orchestrator | Thursday 26 March 2026 04:29:34 +0000 (0:00:00.935) 0:00:28.112 ******** 2026-03-26 04:29:43.882470 | orchestrator | ok: 
[testbed-manager] 2026-03-26 04:29:43.882483 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:29:43.882496 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:29:43.882508 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:29:43.882521 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:29:43.882534 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:29:43.882547 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:29:43.882560 | orchestrator | 2026-03-26 04:29:43.882573 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-26 04:29:43.882586 | orchestrator | Thursday 26 March 2026 04:29:37 +0000 (0:00:02.815) 0:00:30.927 ******** 2026-03-26 04:29:43.882599 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:29:43.882614 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:29:43.882628 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:29:43.882641 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:29:43.882654 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:29:43.882667 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:29:43.882675 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-26 04:29:43.882683 | orchestrator | 2026-03-26 04:29:43.882690 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-26 04:29:43.882698 | orchestrator | Thursday 26 March 2026 04:29:39 +0000 (0:00:02.467) 0:00:33.395 ******** 2026-03-26 04:29:43.882705 | orchestrator | ok: 
[testbed-manager] 2026-03-26 04:29:43.882713 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:29:43.882721 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:29:43.882728 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:29:43.882745 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:29:43.882776 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:29:43.882788 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:29:43.882796 | orchestrator | 2026-03-26 04:29:43.882804 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-26 04:29:43.882811 | orchestrator | Thursday 26 March 2026 04:29:41 +0000 (0:00:01.984) 0:00:35.380 ******** 2026-03-26 04:29:43.882848 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:43.882861 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:43.882870 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:43.882878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:43.882886 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:43.882896 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:43.882904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:43.882919 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:43.882938 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:50.511360 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:50.511493 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:50.511510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:50.511523 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:50.511537 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:50.511574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:50.511602 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:50.511633 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:50.511646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:50.511657 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:50.511668 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:50.511679 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:50.511699 | orchestrator | 2026-03-26 04:29:50.511712 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-26 04:29:50.511724 | orchestrator | Thursday 26 March 2026 04:29:43 +0000 (0:00:02.110) 0:00:37.490 ******** 2026-03-26 04:29:50.511734 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:29:50.511798 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:29:50.511809 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:29:50.511820 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:29:50.511830 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:29:50.511841 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:29:50.511851 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-26 04:29:50.511862 | orchestrator | 
2026-03-26 04:29:50.511873 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-26 04:29:50.511883 | orchestrator | Thursday 26 March 2026 04:29:46 +0000 (0:00:02.170) 0:00:39.661 ******** 2026-03-26 04:29:50.511896 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:29:50.511909 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:29:50.511921 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:29:50.511933 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:29:50.511946 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:29:50.511963 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:29:50.511976 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-26 04:29:50.511987 | orchestrator | 2026-03-26 04:29:50.512000 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-03-26 04:29:50.512012 | orchestrator | Thursday 26 March 2026 04:29:48 +0000 (0:00:02.193) 0:00:41.854 ******** 2026-03-26 04:29:50.512035 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:51.405036 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:51.405137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:51.405173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:51.405184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:51.405194 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:51.405204 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-26 04:29:51.405228 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405318 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405329 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:51.405348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:53.080797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:29:53.080898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:53.080906 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:53.080912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:53.080917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:53.080922 | orchestrator |
2026-03-26 04:29:53.080928 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-03-26 04:29:53.080933 | orchestrator | Thursday 26 March 2026 04:29:51 +0000 (0:00:03.167) 0:00:45.021 ********
2026-03-26 04:29:53.080939 | orchestrator | changed: [testbed-manager] => {
2026-03-26 04:29:53.080945 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:29:53.080950 | orchestrator | }
2026-03-26 04:29:53.080954 | orchestrator | changed: [testbed-node-0] => {
2026-03-26 04:29:53.080959 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:29:53.080963 | orchestrator | }
2026-03-26 04:29:53.080968 | orchestrator | changed: [testbed-node-1] => {
2026-03-26 04:29:53.080972 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:29:53.080977 | orchestrator | }
2026-03-26 04:29:53.080981 | orchestrator | changed: [testbed-node-2] => {
2026-03-26 04:29:53.080986 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:29:53.080990 | orchestrator | }
2026-03-26 04:29:53.080994 | orchestrator | changed: [testbed-node-3] => {
2026-03-26 04:29:53.080999 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:29:53.081003 | orchestrator | }
2026-03-26 04:29:53.081008 | orchestrator | changed: [testbed-node-4] => {
2026-03-26 04:29:53.081013 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:29:53.081017 | orchestrator | }
2026-03-26 04:29:53.081022 | orchestrator | changed: [testbed-node-5] => {
2026-03-26 04:29:53.081026 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:29:53.081030 | orchestrator | }
2026-03-26 04:29:53.081035 | orchestrator |
2026-03-26 04:29:53.081040 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-26 04:29:53.081044 | orchestrator | Thursday 26 March 2026 04:29:52 +0000 (0:00:01.045) 0:00:46.067 ********
2026-03-26 04:29:53.081051 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-26 04:29:53.081075 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:53.081081 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:53.081086 | orchestrator | skipping: [testbed-manager] 2026-03-26 04:29:53.081102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:53.081108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:53.081113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:53.081118 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:29:53.081125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:53.081130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:53.081139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:53.081147 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:29:55.617096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:55.617212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:55.617231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:55.617247 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:29:55.617261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:55.617273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:55.617324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:55.617337 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-26 04:29:55.617349 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-26 04:29:55.617372 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:29:55.617431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:55.617445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:55.617456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:55.617467 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:29:55.617479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-26 04:29:55.617490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:29:55.617501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:29:55.617521 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:29:55.617532 | orchestrator |
2026-03-26 04:29:55.617549 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:29:55.617560 | orchestrator | Thursday 26 March 2026 04:29:54 +0000 (0:00:02.275) 0:00:48.342 ********
2026-03-26 04:29:55.617571 | orchestrator |
2026-03-26 04:29:55.617582 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:29:55.617592 | orchestrator | Thursday 26 March 2026 04:29:54 +0000 (0:00:00.090) 0:00:48.433 ********
2026-03-26 04:29:55.617603 | orchestrator |
2026-03-26 04:29:55.617614 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:29:55.617624 | orchestrator | Thursday 26 March 2026 04:29:54 +0000 (0:00:00.075) 0:00:48.508 ********
2026-03-26 04:29:55.617635 | orchestrator |
2026-03-26 04:29:55.617645 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:29:55.617656 | orchestrator | Thursday 26 March 2026 04:29:54 +0000 (0:00:00.072) 0:00:48.581 ********
2026-03-26 04:29:55.617666 | orchestrator |
2026-03-26 04:29:55.617677 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:29:55.617688 | orchestrator | Thursday 26 March 2026 04:29:55 +0000 (0:00:00.072) 0:00:48.653 ********
2026-03-26 04:29:55.617698 | orchestrator |
2026-03-26 04:29:55.617709 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:29:55.617720 | orchestrator | Thursday 26 March 2026 04:29:55 +0000 (0:00:00.369) 0:00:49.022 ********
2026-03-26 04:29:55.617760 | orchestrator |
2026-03-26 04:29:55.617771 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-26 04:29:55.617782 | orchestrator | Thursday 26 March 2026 04:29:55 +0000 (0:00:00.080) 0:00:49.103 ********
2026-03-26 04:29:55.617792 | orchestrator |
2026-03-26 04:29:55.617803 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-26 04:29:55.617821 | orchestrator | Thursday 26 March 2026 04:29:55 +0000 (0:00:00.109) 0:00:49.212 ********
2026-03-26 04:31:26.688303 | orchestrator | changed: [testbed-manager]
2026-03-26 04:31:26.688452 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:31:26.688477 | orchestrator | changed: [testbed-node-5]
2026-03-26 04:31:26.688497 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:31:26.688517 | orchestrator | changed: [testbed-node-3]
2026-03-26 04:31:26.688577 | orchestrator | changed: [testbed-node-4]
2026-03-26 04:31:26.688597 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:31:26.688615 | orchestrator |
2026-03-26 04:31:26.688636 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-26 04:31:26.688656 | orchestrator | Thursday 26 March 2026 04:30:33 +0000 (0:00:37.548) 0:01:26.760 ********
2026-03-26 04:31:26.688674 | orchestrator | changed: [testbed-manager]
2026-03-26 04:31:26.688693 | orchestrator | changed: [testbed-node-5]
2026-03-26 04:31:26.688712 | orchestrator | changed: [testbed-node-4]
2026-03-26 04:31:26.688730 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:31:26.688747 | orchestrator | changed: [testbed-node-3]
2026-03-26 04:31:26.688766 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:31:26.688783 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:31:26.688802 | orchestrator |
2026-03-26 04:31:26.688822 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-26 04:31:26.688841 | orchestrator | Thursday 26 March 2026 04:31:12 +0000 (0:00:39.056) 0:02:05.817 ********
2026-03-26 04:31:26.688861 | orchestrator | ok: [testbed-manager]
2026-03-26 04:31:26.688883 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:31:26.688903 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:31:26.688923 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:31:26.688943 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:31:26.688996 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:31:26.689018 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:31:26.689036 | orchestrator |
2026-03-26 04:31:26.689055 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-26 04:31:26.689073 | orchestrator | Thursday 26 March 2026 04:31:14 +0000 (0:00:01.973) 0:02:07.790 ********
2026-03-26 04:31:26.689096 | orchestrator | changed: [testbed-manager]
2026-03-26 04:31:26.689116 | orchestrator | changed: [testbed-node-3]
2026-03-26 04:31:26.689136 | orchestrator | changed: [testbed-node-4]
2026-03-26 04:31:26.689154 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:31:26.689172 | orchestrator | changed: [testbed-node-5]
2026-03-26 04:31:26.689195 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:31:26.689216 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:31:26.689235 | orchestrator |
2026-03-26 04:31:26.689253 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:31:26.689276 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:31:26.689296 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:31:26.689313 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:31:26.689331 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:31:26.689349 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:31:26.689368 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:31:26.689386 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:31:26.689404 | orchestrator |
2026-03-26 04:31:26.689422 | orchestrator |
2026-03-26 04:31:26.689440 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:31:26.689478 | orchestrator | Thursday 26 March 2026 04:31:26 +0000 (0:00:11.972) 0:02:19.762 ********
2026-03-26 04:31:26.689496 | orchestrator | ===============================================================================
2026-03-26 04:31:26.689515 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.06s
2026-03-26 04:31:26.689559 | orchestrator | common : Restart fluentd container ------------------------------------- 37.55s
2026-03-26 04:31:26.689579 | orchestrator | common : Restart cron container ---------------------------------------- 11.97s
2026-03-26 04:31:26.689597 | orchestrator | common : Copying over config.json files for services -------------------- 3.67s
2026-03-26 04:31:26.689615 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.31s
2026-03-26 04:31:26.689633 | orchestrator | service-check-containers : common | Check containers -------------------- 3.17s
2026-03-26 04:31:26.689651 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.82s
2026-03-26 04:31:26.689670 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.47s
2026-03-26 04:31:26.689688 | orchestrator | common : include_tasks -------------------------------------------------- 2.46s
2026-03-26 04:31:26.689706 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.28s
2026-03-26 04:31:26.689725 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.26s
2026-03-26 04:31:26.689743 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.26s
2026-03-26 04:31:26.689761 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.19s
2026-03-26 04:31:26.689795 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.17s
2026-03-26 04:31:26.689841 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.11s
2026-03-26 04:31:26.689860 | orchestrator | common : include_tasks -------------------------------------------------- 2.09s
2026-03-26 04:31:26.689878 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.99s
2026-03-26 04:31:26.689896 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.97s
2026-03-26 04:31:26.689915 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.92s
2026-03-26 04:31:26.689933 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.54s
2026-03-26 04:31:27.011643 | orchestrator | + osism apply -a upgrade loadbalancer
2026-03-26 04:31:29.079819 | orchestrator | 2026-03-26 04:31:29 | INFO  | Task 5147d542-5a4e-40bb-a5dc-01c2c9e8a857 (loadbalancer) was prepared for execution.
2026-03-26 04:31:29.079929 | orchestrator | 2026-03-26 04:31:29 | INFO  | It takes a moment until task 5147d542-5a4e-40bb-a5dc-01c2c9e8a857 (loadbalancer) has been started and output is visible here.
2026-03-26 04:32:07.137177 | orchestrator |
2026-03-26 04:32:07.137324 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 04:32:07.137346 | orchestrator |
2026-03-26 04:32:07.137359 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 04:32:07.137370 | orchestrator | Thursday 26 March 2026 04:31:36 +0000 (0:00:02.439) 0:00:02.439 ********
2026-03-26 04:32:07.137381 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:32:07.137393 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:32:07.137404 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:32:07.137415 | orchestrator |
2026-03-26 04:32:07.137426 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 04:32:07.137437 | orchestrator | Thursday 26 March 2026 04:31:38 +0000 (0:00:02.336) 0:00:04.775 ********
2026-03-26 04:32:07.137449 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-03-26 04:32:07.137513 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-03-26 04:32:07.137526 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-03-26 04:32:07.137537 | orchestrator |
2026-03-26 04:32:07.137548 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-03-26 04:32:07.137559 | orchestrator |
2026-03-26 04:32:07.137569 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-26 04:32:07.137580 | orchestrator | Thursday 26 March 2026 04:31:42 +0000 (0:00:03.757) 0:00:08.533 ********
2026-03-26 04:32:07.137591 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:32:07.137602 | orchestrator |
2026-03-26 04:32:07.137613 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-03-26 04:32:07.137623 | orchestrator | Thursday 26 March 2026 04:31:44 +0000 (0:00:02.088) 0:00:10.621 ********
2026-03-26 04:32:07.137634 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:32:07.137645 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:32:07.137655 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:32:07.137668 | orchestrator |
2026-03-26 04:32:07.137680 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-03-26 04:32:07.137693 | orchestrator | Thursday 26 March 2026 04:31:46 +0000 (0:00:02.060) 0:00:12.682 ********
2026-03-26 04:32:07.137706 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:32:07.137719 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:32:07.137731 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:32:07.137743 | orchestrator |
2026-03-26 04:32:07.137755 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-03-26 04:32:07.137768 | orchestrator | Thursday 26 March 2026 04:31:48 +0000 (0:00:01.839) 0:00:14.820 ********
2026-03-26 04:32:07.137780 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:32:07.137792 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:32:07.137828 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:32:07.137841 | orchestrator |
2026-03-26 04:32:07.137853 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-26 04:32:07.137866 | orchestrator | Thursday 26 March 2026 04:31:50 +0000 (0:00:01.921) 0:00:16.660 ********
2026-03-26 04:32:07.137892 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:32:07.137906 | orchestrator |
2026-03-26 04:32:07.137918 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-03-26 04:32:07.137931 | orchestrator | Thursday 26 March 2026 04:31:52 +0000 (0:00:01.697) 0:00:18.581 ********
2026-03-26 04:32:07.137943 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:32:07.137956 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:32:07.137968 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:32:07.137980 | orchestrator |
2026-03-26 04:32:07.137992 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-03-26 04:32:07.138004 | orchestrator | Thursday 26 March 2026 04:31:53 +0000 (0:00:01.697) 0:00:20.279 ********
2026-03-26 04:32:07.138085 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-26 04:32:07.138109 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-26 04:32:07.138129 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-03-26 04:32:07.138149 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-26 04:32:07.138167 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-26 04:32:07.138189 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-03-26 04:32:07.138208 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-26 04:32:07.138230 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-26 04:32:07.138241 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-03-26 04:32:07.138251 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-26 04:32:07.138262 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-26 04:32:07.138273 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-03-26 04:32:07.138283 | orchestrator | 2026-03-26 04:32:07.138294 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-26 04:32:07.138304 | orchestrator | Thursday 26 March 2026 04:31:58 +0000 (0:00:04.044) 0:00:24.323 ******** 2026-03-26 04:32:07.138315 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-26 04:32:07.138326 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-26 04:32:07.138336 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-26 04:32:07.138347 | orchestrator | 2026-03-26 04:32:07.138358 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-26 04:32:07.138388 | orchestrator | Thursday 26 March 2026 04:32:00 +0000 (0:00:02.183) 0:00:26.507 ******** 2026-03-26 04:32:07.138399 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-03-26 04:32:07.138410 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-03-26 04:32:07.138420 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-03-26 04:32:07.138431 | orchestrator | 2026-03-26 04:32:07.138442 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-26 04:32:07.138452 | orchestrator | Thursday 26 March 2026 04:32:02 +0000 (0:00:02.350) 0:00:28.858 ******** 2026-03-26 04:32:07.138486 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-26 04:32:07.138497 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:32:07.138508 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-26 04:32:07.138518 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:32:07.138542 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-26 04:32:07.138553 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:32:07.138564 | orchestrator | 2026-03-26 04:32:07.138574 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-03-26 04:32:07.138585 | orchestrator | Thursday 26 March 2026 04:32:04 +0000 (0:00:01.899) 0:00:30.758 ******** 2026-03-26 04:32:07.138599 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:07.138624 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:07.138636 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:07.138647 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:07.138659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:07.138678 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:18.220800 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:18.220923 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:18.220957 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:18.220972 | orchestrator | 2026-03-26 04:32:18.220985 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-26 04:32:18.220998 | orchestrator | Thursday 26 March 2026 04:32:07 +0000 (0:00:02.672) 0:00:33.430 ******** 2026-03-26 04:32:18.221009 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:32:18.221021 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:32:18.221032 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:32:18.221043 | orchestrator | 2026-03-26 04:32:18.221054 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-26 04:32:18.221065 | orchestrator | Thursday 26 March 2026 04:32:09 +0000 (0:00:01.982) 0:00:35.412 ******** 2026-03-26 04:32:18.221076 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-03-26 04:32:18.221088 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-03-26 04:32:18.221099 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-03-26 04:32:18.221109 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-03-26 04:32:18.221120 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-03-26 04:32:18.221130 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-03-26 04:32:18.221141 | orchestrator | 2026-03-26 04:32:18.221152 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-26 04:32:18.221162 | orchestrator | Thursday 26 March 2026 04:32:11 +0000 (0:00:02.727) 0:00:38.141 ******** 2026-03-26 04:32:18.221173 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:32:18.221184 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:32:18.221194 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:32:18.221205 | orchestrator | 2026-03-26 04:32:18.221216 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-26 04:32:18.221226 | orchestrator | Thursday 26 March 2026 04:32:14 +0000 (0:00:02.315) 0:00:40.456 ******** 2026-03-26 04:32:18.221237 | orchestrator | ok: 
[testbed-node-1] 2026-03-26 04:32:18.221247 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:32:18.221258 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:32:18.221289 | orchestrator | 2026-03-26 04:32:18.221300 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-26 04:32:18.221311 | orchestrator | Thursday 26 March 2026 04:32:16 +0000 (0:00:02.341) 0:00:42.798 ******** 2026-03-26 04:32:18.221323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 04:32:18.221352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:32:18.221365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:32:18.221379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 04:32:18.221396 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:32:18.221408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 04:32:18.221420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:32:18.221469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:32:18.221482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 04:32:18.221493 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
04:32:18.221512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 04:32:23.292174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:32:23.292288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:32:23.292307 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 04:32:23.292340 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:32:23.292354 | orchestrator | 2026-03-26 04:32:23.292366 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-26 04:32:23.292378 | orchestrator | Thursday 26 March 2026 04:32:18 +0000 (0:00:01.712) 0:00:44.511 ******** 2026-03-26 04:32:23.292407 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:23.292419 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:23.292483 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:23.292517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:23.292536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:32:23.292548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 04:32:23.292568 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:23.292580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:32:23.292591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 04:32:23.292610 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:37.093948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:32:37.094131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689', '__omit_place_holder__963cad7ce6778b422210f936b7a59b3fd90ba689'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-26 04:32:37.094207 | orchestrator | 2026-03-26 04:32:37.094223 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-26 04:32:37.094235 | orchestrator | Thursday 26 March 2026 04:32:23 +0000 (0:00:05.084) 0:00:49.596 ******** 2026-03-26 04:32:37.094247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:37.094260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:37.094272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:37.094283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:37.094322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:37.094335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:37.094355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:37.094366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:37.094378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:37.094389 | orchestrator | 2026-03-26 04:32:37.094400 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-26 04:32:37.094434 | orchestrator | Thursday 26 March 2026 04:32:28 +0000 (0:00:04.771) 0:00:54.367 ******** 2026-03-26 04:32:37.094446 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-26 04:32:37.094459 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-26 04:32:37.094472 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-26 
04:32:37.094485 | orchestrator | 2026-03-26 04:32:37.094497 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-26 04:32:37.094510 | orchestrator | Thursday 26 March 2026 04:32:30 +0000 (0:00:02.726) 0:00:57.094 ******** 2026-03-26 04:32:37.094522 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-26 04:32:37.094534 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-26 04:32:37.094547 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-26 04:32:37.094559 | orchestrator | 2026-03-26 04:32:37.094572 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-26 04:32:37.094585 | orchestrator | Thursday 26 March 2026 04:32:35 +0000 (0:00:04.402) 0:01:01.496 ******** 2026-03-26 04:32:37.094598 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:32:37.094611 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:32:37.094631 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:32:57.478965 | orchestrator | 2026-03-26 04:32:57.479107 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-26 04:32:57.479124 | orchestrator | Thursday 26 March 2026 04:32:37 +0000 (0:00:01.896) 0:01:03.393 ******** 2026-03-26 04:32:57.479137 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-26 04:32:57.479148 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-26 04:32:57.479174 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-26 04:32:57.479186 | 
orchestrator | 2026-03-26 04:32:57.479197 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-26 04:32:57.479208 | orchestrator | Thursday 26 March 2026 04:32:40 +0000 (0:00:02.967) 0:01:06.361 ******** 2026-03-26 04:32:57.479218 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-26 04:32:57.479230 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-26 04:32:57.479240 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-26 04:32:57.479251 | orchestrator | 2026-03-26 04:32:57.479262 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-26 04:32:57.479272 | orchestrator | Thursday 26 March 2026 04:32:42 +0000 (0:00:02.798) 0:01:09.159 ******** 2026-03-26 04:32:57.479283 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:32:57.479293 | orchestrator | 2026-03-26 04:32:57.479304 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-26 04:32:57.479314 | orchestrator | Thursday 26 March 2026 04:32:44 +0000 (0:00:01.863) 0:01:11.023 ******** 2026-03-26 04:32:57.479326 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-03-26 04:32:57.479337 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-03-26 04:32:57.479348 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-03-26 04:32:57.479358 | orchestrator | 2026-03-26 04:32:57.479392 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-26 04:32:57.479404 | orchestrator | Thursday 26 March 2026 04:32:47 +0000 (0:00:02.690) 0:01:13.713 ******** 2026-03-26 04:32:57.479415 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-26 04:32:57.479426 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-26 04:32:57.479438 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-26 04:32:57.479448 | orchestrator | 2026-03-26 04:32:57.479459 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-03-26 04:32:57.479469 | orchestrator | Thursday 26 March 2026 04:32:49 +0000 (0:00:02.590) 0:01:16.303 ******** 2026-03-26 04:32:57.479480 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:32:57.479492 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:32:57.479505 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:32:57.479517 | orchestrator | 2026-03-26 04:32:57.479529 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-03-26 04:32:57.479542 | orchestrator | Thursday 26 March 2026 04:32:51 +0000 (0:00:01.409) 0:01:17.713 ******** 2026-03-26 04:32:57.479554 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:32:57.479566 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:32:57.479578 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:32:57.479588 | orchestrator | 2026-03-26 04:32:57.479599 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-26 04:32:57.479610 | orchestrator | Thursday 26 March 2026 04:32:53 +0000 (0:00:01.933) 0:01:19.647 ******** 2026-03-26 04:32:57.479624 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:57.479648 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:57.479683 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 04:32:57.479696 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:57.479708 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:57.479719 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:32:57.479732 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:57.479751 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:32:57.479770 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:33:01.210224 | orchestrator | 2026-03-26 04:33:01.210330 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-26 04:33:01.210346 | orchestrator | Thursday 26 March 2026 04:32:57 +0000 (0:00:04.123) 0:01:23.771 ******** 2026-03-26 04:33:01.210427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 04:33:01.210445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:01.210457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:01.210469 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:01.210482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 04:33:01.210516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:01.210528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:01.210539 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:01.210573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 04:33:01.210586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:01.210598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:01.210609 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:01.210620 | orchestrator | 2026-03-26 04:33:01.210632 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-03-26 04:33:01.210643 | orchestrator | Thursday 26 March 2026 04:32:59 +0000 (0:00:01.634) 0:01:25.405 ******** 2026-03-26 04:33:01.210654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 04:33:01.210673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:01.210685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:01.210696 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:01.210716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 04:33:13.052436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 04:33:13.052558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:13.052574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:13.052606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:13.052620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:13.052632 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:13.052646 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:13.052657 | orchestrator | 2026-03-26 04:33:13.052669 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-03-26 04:33:13.052681 | orchestrator | Thursday 26 March 2026 04:33:01 +0000 (0:00:02.100) 0:01:27.505 ******** 2026-03-26 04:33:13.052692 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-26 04:33:13.052704 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-26 04:33:13.052715 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-03-26 04:33:13.052726 | orchestrator | 2026-03-26 04:33:13.052736 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-03-26 04:33:13.052747 | orchestrator | Thursday 26 March 2026 04:33:03 +0000 (0:00:02.576) 0:01:30.082 ******** 2026-03-26 04:33:13.052758 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-26 04:33:13.052769 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-26 04:33:13.052780 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-03-26 04:33:13.052790 | orchestrator | 2026-03-26 04:33:13.052818 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-03-26 04:33:13.052837 | orchestrator | Thursday 26 March 2026 04:33:06 +0000 (0:00:02.495) 0:01:32.577 ******** 2026-03-26 04:33:13.052848 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 04:33:13.052859 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 04:33:13.052869 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 04:33:13.052880 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:13.052891 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-03-26 04:33:13.052901 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 04:33:13.052912 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:13.052933 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-26 04:33:13.052945 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:13.052959 | orchestrator | 2026-03-26 04:33:13.052970 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-03-26 04:33:13.052982 | orchestrator | Thursday 26 March 2026 04:33:08 +0000 (0:00:02.658) 0:01:35.236 ******** 2026-03-26 04:33:13.052996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 04:33:13.053011 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 04:33:13.053025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 04:33:13.053038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-03-26 04:33:13.053060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:33:16.714396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:33:16.714544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:33:16.714561 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:33:16.714573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:33:16.714585 | orchestrator | 2026-03-26 04:33:16.714599 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-03-26 04:33:16.714612 | orchestrator | Thursday 26 March 2026 04:33:13 +0000 (0:00:04.113) 0:01:39.350 ******** 2026-03-26 04:33:16.714623 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 04:33:16.714636 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:33:16.714647 | orchestrator | } 2026-03-26 04:33:16.714658 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 04:33:16.714668 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:33:16.714679 | orchestrator | } 2026-03-26 04:33:16.714690 | orchestrator | changed: [testbed-node-2] => { 2026-03-26 04:33:16.714700 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:33:16.714711 | orchestrator | } 2026-03-26 
04:33:16.714722 | orchestrator | 2026-03-26 04:33:16.714733 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-26 04:33:16.714744 | orchestrator | Thursday 26 March 2026 04:33:14 +0000 (0:00:01.395) 0:01:40.745 ******** 2026-03-26 04:33:16.714755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-26 04:33:16.714789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:16.714810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:16.714822 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:16.714833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 04:33:16.714845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:16.714856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:16.714867 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:16.714881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 04:33:16.714894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:33:16.714927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:33:22.355072 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:22.355218 | orchestrator | 2026-03-26 04:33:22.355248 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-26 04:33:22.355270 | orchestrator | Thursday 26 March 2026 04:33:16 +0000 (0:00:02.265) 0:01:43.010 ******** 2026-03-26 04:33:22.355290 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:33:22.355309 | orchestrator | 2026-03-26 04:33:22.355327 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-26 04:33:22.355410 | orchestrator | Thursday 26 March 2026 04:33:18 +0000 (0:00:02.022) 0:01:45.032 ******** 2026-03-26 04:33:22.355437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:33:22.355462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 04:33:22.355483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:22.355502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 04:33:22.355601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:33:22.355628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 04:33:22.355648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:22.355668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:33:22.355688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 04:33:22.355719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 04:33:22.355757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:24.109930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 04:33:24.110101 | orchestrator | 2026-03-26 04:33:24.110121 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-26 04:33:24.110134 | orchestrator | Thursday 26 March 2026 04:33:23 +0000 (0:00:04.748) 0:01:49.781 ******** 2026-03-26 04:33:24.110181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:33:24.110200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-03-26 04:33:24.110235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:24.110262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 04:33:24.110274 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:24.110305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:33:24.110319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 04:33:24.110395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:24.110408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 04:33:24.110428 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:24.110440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:33:24.110458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-26 04:33:24.110480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:39.105381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-26 04:33:39.105502 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:39.105522 | orchestrator | 2026-03-26 04:33:39.105535 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-26 04:33:39.105548 | orchestrator | Thursday 26 March 2026 04:33:25 +0000 (0:00:01.719) 0:01:51.501 ******** 2026-03-26 04:33:39.105561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-03-26 04:33:39.105575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:39.105611 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:39.105623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:39.105634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:39.105645 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:39.105657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:39.105668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:39.105678 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:39.105689 | orchestrator | 2026-03-26 04:33:39.105701 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-26 04:33:39.105712 | orchestrator | Thursday 26 March 2026 04:33:27 +0000 (0:00:02.425) 0:01:53.927 ******** 2026-03-26 04:33:39.105723 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:33:39.105735 | 
orchestrator | ok: [testbed-node-1] 2026-03-26 04:33:39.105745 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:33:39.105756 | orchestrator | 2026-03-26 04:33:39.105767 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-26 04:33:39.105777 | orchestrator | Thursday 26 March 2026 04:33:29 +0000 (0:00:02.386) 0:01:56.313 ******** 2026-03-26 04:33:39.105788 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:33:39.105798 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:33:39.105809 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:33:39.105819 | orchestrator | 2026-03-26 04:33:39.105830 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-26 04:33:39.105856 | orchestrator | Thursday 26 March 2026 04:33:32 +0000 (0:00:02.961) 0:01:59.275 ******** 2026-03-26 04:33:39.105869 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:33:39.105881 | orchestrator | 2026-03-26 04:33:39.105893 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-26 04:33:39.105905 | orchestrator | Thursday 26 March 2026 04:33:34 +0000 (0:00:01.605) 0:02:00.881 ******** 2026-03-26 04:33:39.105940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:33:39.105959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:39.105981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:33:39.105995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:33:39.106014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:39.106124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:33:40.733107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:33:40.733238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:40.733255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:33:40.733268 | orchestrator | 2026-03-26 04:33:40.733281 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-26 04:33:40.733293 | orchestrator | Thursday 26 March 2026 04:33:39 +0000 (0:00:04.523) 0:02:05.404 ******** 2026-03-26 04:33:40.733355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:33:40.733371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:40.733401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:33:40.733423 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:40.733437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:33:40.733449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-26 04:33:40.733466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:33:40.733477 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:40.733489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:33:40.733517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-03-26 04:33:56.975739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:33:56.975860 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:56.975885 | orchestrator | 2026-03-26 04:33:56.975906 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-26 04:33:56.975926 | orchestrator | Thursday 26 March 2026 04:33:40 +0000 (0:00:01.626) 0:02:07.030 ******** 2026-03-26 04:33:56.975946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:56.975970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:56.975992 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:56.976011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:56.976028 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:56.976040 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:56.976051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:56.976063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:33:56.976074 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:56.976084 | orchestrator | 2026-03-26 04:33:56.976095 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-26 04:33:56.976106 | orchestrator | Thursday 26 March 2026 04:33:42 +0000 (0:00:01.806) 0:02:08.837 ******** 2026-03-26 04:33:56.976139 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:33:56.976151 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:33:56.976161 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:33:56.976172 | orchestrator | 2026-03-26 04:33:56.976183 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-26 04:33:56.976194 | orchestrator | Thursday 26 March 2026 04:33:44 +0000 (0:00:02.341) 0:02:11.178 ******** 2026-03-26 04:33:56.976204 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:33:56.976215 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:33:56.976225 | orchestrator | ok: [testbed-node-2] 2026-03-26 
04:33:56.976236 | orchestrator | 2026-03-26 04:33:56.976246 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-26 04:33:56.976258 | orchestrator | Thursday 26 March 2026 04:33:47 +0000 (0:00:02.844) 0:02:14.023 ******** 2026-03-26 04:33:56.976270 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:56.976315 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:56.976328 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:33:56.976341 | orchestrator | 2026-03-26 04:33:56.976353 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-26 04:33:56.976365 | orchestrator | Thursday 26 March 2026 04:33:49 +0000 (0:00:01.352) 0:02:15.375 ******** 2026-03-26 04:33:56.976377 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:33:56.976389 | orchestrator | 2026-03-26 04:33:56.976401 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-26 04:33:56.976413 | orchestrator | Thursday 26 March 2026 04:33:50 +0000 (0:00:01.728) 0:02:17.104 ******** 2026-03-26 04:33:56.976464 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}}}}) 2026-03-26 04:33:56.976483 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-26 04:33:56.976496 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-03-26 04:33:56.976518 | orchestrator | 2026-03-26 04:33:56.976530 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-26 04:33:56.976549 | orchestrator | Thursday 
26 March 2026 04:33:54 +0000 (0:00:03.538) 0:02:20.643 ******** 2026-03-26 04:33:56.976563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-26 04:33:56.976575 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:33:56.976588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-26 04:33:56.976601 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:33:56.976621 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-03-26 04:34:09.402371 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:09.402482 | orchestrator | 2026-03-26 04:34:09.402497 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-26 04:34:09.402509 | orchestrator | Thursday 26 March 2026 04:33:56 +0000 (0:00:02.633) 0:02:23.276 ******** 2026-03-26 04:34:09.402522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 04:34:09.402535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 04:34:09.402567 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:09.402591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 04:34:09.402602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 04:34:09.402612 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:09.402622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 04:34:09.402633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-03-26 04:34:09.402643 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:09.402652 | orchestrator | 2026-03-26 04:34:09.402662 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-26 04:34:09.402672 | orchestrator | Thursday 26 March 2026 04:33:59 +0000 (0:00:02.867) 0:02:26.143 ******** 2026-03-26 04:34:09.402682 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:09.402691 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:09.402701 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:09.402710 | orchestrator | 2026-03-26 04:34:09.402720 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-26 04:34:09.402729 | orchestrator | Thursday 26 March 2026 04:34:01 +0000 (0:00:01.591) 0:02:27.735 ******** 2026-03-26 04:34:09.402738 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:09.402748 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:09.402758 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:09.402767 | orchestrator | 2026-03-26 04:34:09.402776 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-26 04:34:09.402786 | orchestrator | Thursday 26 March 2026 04:34:03 +0000 (0:00:02.474) 0:02:30.209 ******** 2026-03-26 04:34:09.402795 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:34:09.402805 | orchestrator | 2026-03-26 04:34:09.402814 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-26 04:34:09.402824 | orchestrator | Thursday 26 March 2026 04:34:05 +0000 (0:00:01.805) 0:02:32.015 ******** 2026-03-26 04:34:09.402855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:34:09.402891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:34:09.402909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 04:34:09.402922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 04:34:09.402934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:34:09.402954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:34:11.400220 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400402 | orchestrator | 2026-03-26 04:34:11.400441 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-26 04:34:11.400455 | orchestrator | Thursday 26 March 2026 04:34:10 +0000 (0:00:04.835) 0:02:36.850 ******** 
2026-03-26 04:34:11.400475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:34:11.400488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 04:34:11.400531 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:11.400554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:34:22.559905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:34:22.560024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 04:34:22.560042 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 04:34:22.560055 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:22.560096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:34:22.560111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:34:22.560162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-26 04:34:22.560177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-03-26 04:34:22.560189 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:22.560200 | orchestrator | 2026-03-26 04:34:22.560212 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-26 04:34:22.560224 | orchestrator | Thursday 26 March 2026 04:34:12 +0000 (0:00:01.941) 0:02:38.792 ******** 2026-03-26 04:34:22.560236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:22.560298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:22.560321 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:22.560332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:22.560344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:22.560355 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:22.560366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-03-26 04:34:22.560377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:22.560387 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:22.560398 | orchestrator | 2026-03-26 04:34:22.560409 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-26 04:34:22.560421 | orchestrator | Thursday 26 March 2026 04:34:14 +0000 (0:00:02.014) 0:02:40.806 ******** 2026-03-26 04:34:22.560434 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:34:22.560447 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:34:22.560459 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:34:22.560471 | orchestrator | 2026-03-26 04:34:22.560483 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-26 04:34:22.560495 | orchestrator | Thursday 26 March 2026 04:34:16 +0000 (0:00:02.237) 0:02:43.043 ******** 2026-03-26 04:34:22.560508 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:34:22.560520 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:34:22.560532 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:34:22.560544 | orchestrator | 2026-03-26 04:34:22.560556 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-26 04:34:22.560568 | orchestrator | Thursday 26 March 2026 04:34:19 +0000 (0:00:02.849) 0:02:45.893 ******** 2026-03-26 04:34:22.560581 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:22.560593 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:22.560605 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:22.560617 | orchestrator | 2026-03-26 04:34:22.560629 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-03-26 04:34:22.560641 | orchestrator | Thursday 26 March 2026 04:34:21 +0000 (0:00:01.567) 0:02:47.460 ******** 2026-03-26 04:34:22.560654 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:22.560666 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:22.560686 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:27.917683 | orchestrator | 2026-03-26 04:34:27.917792 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-26 04:34:27.917810 | orchestrator | Thursday 26 March 2026 04:34:22 +0000 (0:00:01.398) 0:02:48.859 ******** 2026-03-26 04:34:27.917839 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:34:27.917851 | orchestrator | 2026-03-26 04:34:27.917864 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-26 04:34:27.917875 | orchestrator | Thursday 26 March 2026 04:34:24 +0000 (0:00:01.783) 0:02:50.643 ******** 2026-03-26 04:34:27.917893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:34:27.917932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 04:34:27.917947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 04:34:27.917959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 04:34:27.917971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 04:34:27.918008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:34:27.918090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-26 04:34:27.918104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:34:27.918117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 04:34:27.918128 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 04:34:27.918139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 04:34:27.918164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858456 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:34:29.858583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 04:34:29.858608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-26 04:34:29.858726 | orchestrator | 2026-03-26 04:34:29.858739 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-26 04:34:29.858751 | orchestrator | Thursday 26 March 2026 04:34:29 +0000 (0:00:04.864) 0:02:55.508 ******** 2026-03-26 04:34:29.858762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:34:29.858780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 04:34:29.858807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.117334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.117435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.117452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:34:31.117467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.118014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 04:34:31.118133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.118142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.118151 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:31.118160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.118166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.118174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.118198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-26 04:34:31.118209 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:31.118227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:34:46.257700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-26 04:34:46.257828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-26 04:34:46.257856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-26 04:34:46.257876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-26 04:34:46.257941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:34:46.257960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-26 04:34:46.257972 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:46.257985 | orchestrator | 2026-03-26 04:34:46.257995 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-26 04:34:46.258006 | orchestrator | Thursday 26 March 2026 04:34:31 +0000 (0:00:01.914) 0:02:57.422 ******** 2026-03-26 04:34:46.258094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:46.258109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:46.258121 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:46.258130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:46.258140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:46.258150 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:46.258161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:46.258171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:34:46.258180 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:46.258190 | orchestrator | 2026-03-26 04:34:46.258199 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-26 04:34:46.258209 | orchestrator | Thursday 26 March 2026 04:34:33 +0000 (0:00:02.174) 0:02:59.597 ******** 2026-03-26 04:34:46.258263 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:34:46.258276 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:34:46.258287 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:34:46.258297 | orchestrator | 2026-03-26 04:34:46.258308 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-26 04:34:46.258319 | orchestrator | Thursday 26 March 2026 04:34:35 +0000 (0:00:02.304) 0:03:01.901 ******** 2026-03-26 04:34:46.258330 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:34:46.258340 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:34:46.258350 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:34:46.258361 | orchestrator | 2026-03-26 04:34:46.258372 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-26 04:34:46.258383 | orchestrator | Thursday 26 March 2026 04:34:38 +0000 (0:00:02.923) 0:03:04.825 ******** 2026-03-26 04:34:46.258394 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:46.258405 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:34:46.258416 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:34:46.258426 | orchestrator | 2026-03-26 04:34:46.258437 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-03-26 04:34:46.258449 | orchestrator | Thursday 26 March 2026 04:34:39 +0000 (0:00:01.332) 0:03:06.157 ******** 2026-03-26 04:34:46.258459 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:34:46.258470 | orchestrator | 2026-03-26 04:34:46.258481 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-26 04:34:46.258492 | orchestrator | Thursday 26 March 2026 04:34:41 +0000 (0:00:01.819) 0:03:07.977 ******** 2026-03-26 04:34:46.258521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 04:34:47.378895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 04:34:47.379047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 04:34:47.379089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 04:34:47.379116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-26 
04:34:47.379139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 
04:34:50.817528 | orchestrator | 2026-03-26 04:34:50.817622 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-26 04:34:50.817636 | orchestrator | Thursday 26 March 2026 04:34:47 +0000 (0:00:05.708) 0:03:13.686 ******** 2026-03-26 04:34:50.817668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 04:34:50.817684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 04:34:50.817716 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:34:50.817751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 04:34:50.817765 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 04:34:50.817782 | orchestrator | 
skipping: [testbed-node-1] 2026-03-26 04:34:50.817805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-26 04:35:09.422155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-26 04:35:09.422276 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:09.422288 | orchestrator | 2026-03-26 04:35:09.422295 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************
2026-03-26 04:35:09.422303 | orchestrator | Thursday 26 March 2026 04:34:51 +0000 (0:00:04.576) 0:03:18.262 ********
2026-03-26 04:35:09.422311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-26 04:35:09.422318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-26 04:35:09.422325 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:35:09.422332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-26 04:35:09.422358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-26 04:35:09.422365 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:35:09.422372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-26 04:35:09.422379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-03-26 04:35:09.422385 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:35:09.422397 | orchestrator |
2026-03-26 04:35:09.422404 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-03-26 04:35:09.422411 | orchestrator | Thursday 26 March 2026 04:34:56 +0000 (0:00:04.653) 0:03:22.916 ********
2026-03-26 04:35:09.422417 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:35:09.422424 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:35:09.422430 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:35:09.422437 | orchestrator |
2026-03-26 04:35:09.422443 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-03-26 04:35:09.422450 | orchestrator | Thursday 26 March 2026 04:34:58 +0000 (0:00:02.295) 0:03:25.211 ********
2026-03-26 04:35:09.422456 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:35:09.422462 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:35:09.422469 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:35:09.422475 | orchestrator |
2026-03-26 04:35:09.422482 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-03-26 04:35:09.422489 | orchestrator | Thursday 26 March 2026 04:35:01 +0000 (0:00:01.641) 0:03:28.175 ********
2026-03-26 04:35:09.422495 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:35:09.422502 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:35:09.422509 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:35:09.422515 | orchestrator |
2026-03-26 04:35:09.422522 | orchestrator | TASK [include_role : grafana] **************************************************
2026-03-26 04:35:09.422528 | orchestrator | Thursday 26 March 2026 04:35:03 +0000 (0:00:01.735) 0:03:29.817 ********
2026-03-26 04:35:09.422535 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:35:09.422541 | orchestrator |
2026-03-26 04:35:09.422548 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-03-26 04:35:09.422555 | orchestrator | Thursday 26 March 2026 04:35:05 +0000 (0:00:01.735) 0:03:31.553 ******** 2026-03-26
04:35:09.422563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:35:09.422579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:35:25.577813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:35:25.577944 | orchestrator | 2026-03-26 04:35:25.577961 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-26 04:35:25.577973 | orchestrator | Thursday 26 March 2026 04:35:09 +0000 (0:00:04.170) 0:03:35.723 ******** 2026-03-26 04:35:25.577986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:35:25.577998 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:35:25.578011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:35:25.578098 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:35:25.578110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:35:25.578121 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:25.578132 | orchestrator | 2026-03-26 04:35:25.578143 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-26 04:35:25.578211 | orchestrator | Thursday 26 March 2026 04:35:10 +0000 (0:00:01.512) 0:03:37.235 ******** 2026-03-26 04:35:25.578225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:35:25.578247 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-03-26 04:35:25.578260 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:35:25.578296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-03-26 04:35:25.578318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-03-26 04:35:25.578329 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:35:25.578340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-03-26 04:35:25.578351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-03-26 04:35:25.578362 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:35:25.578373 | orchestrator |
2026-03-26 04:35:25.578384 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-03-26 04:35:25.578394 | orchestrator | Thursday 26 March 2026 04:35:12 +0000 (0:00:01.381) 0:03:38.616 ********
2026-03-26 04:35:25.578405 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:35:25.578417 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:35:25.578427 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:35:25.578438 | orchestrator |
2026-03-26 04:35:25.578449 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-03-26 04:35:25.578460 | orchestrator | Thursday 26 March 2026 04:35:14 +0000 (0:00:02.275) 0:03:40.892 ********
2026-03-26 04:35:25.578470 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:35:25.578481 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:35:25.578491 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:35:25.578502 | orchestrator |
2026-03-26 04:35:25.578513 | orchestrator | TASK [include_role : heat] *****************************************************
2026-03-26 04:35:25.578523 | orchestrator | Thursday 26 March 2026 04:35:17 +0000 (0:00:02.862) 0:03:43.755 ********
2026-03-26 04:35:25.578534 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:35:25.578545 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:35:25.578556 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:35:25.578566 | orchestrator |
2026-03-26 04:35:25.578577 | orchestrator | TASK [include_role : horizon] **************************************************
2026-03-26 04:35:25.578588 | orchestrator | Thursday 26 March 2026 04:35:18 +0000 (0:00:01.425) 0:03:45.180 ********
2026-03-26 04:35:25.578599 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:35:25.578609 | orchestrator |
2026-03-26 04:35:25.578620 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-03-26 04:35:25.578631 | orchestrator | Thursday 26 March 2026 04:35:20 +0000 (0:00:01.779) 0:03:46.960 ********
2026-03-26 04:35:25.578660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no',
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 
04:35:27.270306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 04:35:27.270445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-26 04:35:27.270488 | orchestrator | 2026-03-26 04:35:27.270503 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-26 04:35:27.270515 | orchestrator | Thursday 26 March 2026 04:35:25 +0000 (0:00:04.915) 0:03:51.875 ******** 2026-03-26 04:35:27.270529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 04:35:27.270549 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:35:27.270645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 04:35:36.058561 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:35:36.058737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-26 04:35:36.058820 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:36.058844 | orchestrator | 2026-03-26 04:35:36.058863 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-26 04:35:36.058884 | orchestrator | Thursday 26 March 2026 04:35:27 +0000 (0:00:01.701) 0:03:53.577 ******** 2026-03-26 04:35:36.058924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-26 04:35:36.058949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 04:35:36.058963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-26 04:35:36.058977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 04:35:36.058988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-26 04:35:36.059001 | orchestrator | skipping: [testbed-node-0] 2026-03-26 
04:35:36.059034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-26 04:35:36.059046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 04:35:36.059058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-26 04:35:36.059072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 04:35:36.059095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-26 04:35:36.059108 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:35:36.059121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-26 04:35:36.059134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 04:35:36.059175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-03-26 04:35:36.059196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-26 04:35:36.059209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-26 04:35:36.059222 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:36.059234 | orchestrator | 2026-03-26 04:35:36.059247 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-26 04:35:36.059260 | orchestrator | Thursday 26 March 2026 04:35:29 +0000 (0:00:02.029) 0:03:55.607 ******** 2026-03-26 04:35:36.059274 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:35:36.059295 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:35:36.059313 | orchestrator 
| ok: [testbed-node-2] 2026-03-26 04:35:36.059331 | orchestrator | 2026-03-26 04:35:36.059349 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-26 04:35:36.059368 | orchestrator | Thursday 26 March 2026 04:35:31 +0000 (0:00:02.253) 0:03:57.860 ******** 2026-03-26 04:35:36.059385 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:35:36.059406 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:35:36.059427 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:35:36.059446 | orchestrator | 2026-03-26 04:35:36.059468 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-26 04:35:36.059486 | orchestrator | Thursday 26 March 2026 04:35:34 +0000 (0:00:02.893) 0:04:00.754 ******** 2026-03-26 04:35:36.059504 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:35:36.059523 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:35:36.059542 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:36.059561 | orchestrator | 2026-03-26 04:35:36.059572 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-26 04:35:36.059583 | orchestrator | Thursday 26 March 2026 04:35:35 +0000 (0:00:01.395) 0:04:02.150 ******** 2026-03-26 04:35:36.059604 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:35:46.147560 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:35:46.147677 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:46.147691 | orchestrator | 2026-03-26 04:35:46.147703 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-26 04:35:46.147714 | orchestrator | Thursday 26 March 2026 04:35:37 +0000 (0:00:01.364) 0:04:03.514 ******** 2026-03-26 04:35:46.147750 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:35:46.147767 | orchestrator | 2026-03-26 04:35:46.147784 | orchestrator | TASK 
[haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-26 04:35:46.147800 | orchestrator | Thursday 26 March 2026 04:35:39 +0000 (0:00:02.025) 0:04:05.540 ******** 2026-03-26 04:35:46.147823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-26 04:35:46.147840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 
04:35:46.147866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 04:35:46.147878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-26 04:35:46.147906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 04:35:46.147926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 04:35:46.147937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-03-26 04:35:46.147952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 04:35:46.147963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 04:35:46.147973 | orchestrator | 2026-03-26 04:35:46.147983 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-26 04:35:46.147993 | orchestrator | Thursday 26 March 2026 04:35:44 +0000 (0:00:04.854) 0:04:10.395 ******** 2026-03-26 04:35:46.148012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-26 04:35:47.855641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 04:35:47.855783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 04:35:47.855814 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:35:47.855860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-26 04:35:47.855887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 04:35:47.855936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 04:35:47.855956 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:35:47.856003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-03-26 04:35:47.856026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-26 04:35:47.856056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-26 04:35:47.856069 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:47.856080 | orchestrator | 2026-03-26 04:35:47.856093 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-26 04:35:47.856105 | orchestrator | Thursday 26 March 2026 04:35:46 +0000 (0:00:02.048) 0:04:12.444 ******** 2026-03-26 04:35:47.856118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-26 04:35:47.856159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-26 04:35:47.856186 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:35:47.856198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-26 04:35:47.856209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-26 04:35:47.856219 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:35:47.856230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-26 04:35:47.856242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-03-26 04:35:47.856253 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:35:47.856264 | orchestrator | 
2026-03-26 04:35:47.856275 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-03-26 04:35:47.856294 | orchestrator | Thursday 26 March 2026 04:35:47 +0000 (0:00:01.707) 0:04:14.151 ******** 2026-03-26 04:36:03.518783 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:36:03.518899 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:36:03.518906 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:36:03.518911 | orchestrator | 2026-03-26 04:36:03.518917 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-26 04:36:03.518924 | orchestrator | Thursday 26 March 2026 04:35:50 +0000 (0:00:02.283) 0:04:16.434 ******** 2026-03-26 04:36:03.518928 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:36:03.518933 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:36:03.518937 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:36:03.518941 | orchestrator | 2026-03-26 04:36:03.518945 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-26 04:36:03.518949 | orchestrator | Thursday 26 March 2026 04:35:53 +0000 (0:00:03.304) 0:04:19.739 ******** 2026-03-26 04:36:03.518954 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:03.518959 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:03.518963 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:03.518967 | orchestrator | 2026-03-26 04:36:03.518971 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-26 04:36:03.518976 | orchestrator | Thursday 26 March 2026 04:35:54 +0000 (0:00:01.383) 0:04:21.123 ******** 2026-03-26 04:36:03.518980 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:36:03.518985 | orchestrator | 2026-03-26 04:36:03.518989 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-26 
04:36:03.518993 | orchestrator | Thursday 26 March 2026 04:35:56 +0000 (0:00:01.920) 0:04:23.044 ******** 2026-03-26 04:36:03.519018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:36:03.519044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:36:03.519050 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:36:03.519067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:36:03.519072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:36:03.519084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:36:03.519089 | orchestrator | 2026-03-26 04:36:03.519094 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-26 04:36:03.519099 | orchestrator | Thursday 26 March 2026 04:36:01 +0000 (0:00:04.911) 
0:04:27.955 ******** 2026-03-26 04:36:03.519104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:36:03.519112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:36:16.824862 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:16.825010 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:36:16.825084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:36:16.825193 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:16.825218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:36:16.825239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:36:16.825251 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:16.825263 | orchestrator | 2026-03-26 04:36:16.825275 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************ 2026-03-26 04:36:16.825287 | orchestrator | Thursday 26 March 2026 04:36:03 +0000 (0:00:01.862) 0:04:29.817 ******** 2026-03-26 04:36:16.825318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:16.825333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:16.825346 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:16.825357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:16.825370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:16.825394 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:16.825406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:16.825419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:16.825432 | orchestrator | skipping: [testbed-node-2] 
2026-03-26 04:36:16.825444 | orchestrator | 2026-03-26 04:36:16.825457 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-26 04:36:16.825469 | orchestrator | Thursday 26 March 2026 04:36:05 +0000 (0:00:02.006) 0:04:31.823 ******** 2026-03-26 04:36:16.825481 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:36:16.825494 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:36:16.825506 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:36:16.825518 | orchestrator | 2026-03-26 04:36:16.825537 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-26 04:36:16.825551 | orchestrator | Thursday 26 March 2026 04:36:07 +0000 (0:00:02.371) 0:04:34.195 ******** 2026-03-26 04:36:16.825563 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:36:16.825575 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:36:16.825588 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:36:16.825600 | orchestrator | 2026-03-26 04:36:16.825653 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-26 04:36:16.825665 | orchestrator | Thursday 26 March 2026 04:36:10 +0000 (0:00:02.925) 0:04:37.121 ******** 2026-03-26 04:36:16.825688 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:36:16.825699 | orchestrator | 2026-03-26 04:36:16.825710 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-26 04:36:16.825721 | orchestrator | Thursday 26 March 2026 04:36:12 +0000 (0:00:02.165) 0:04:39.286 ******** 2026-03-26 04:36:16.825734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:36:16.825748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:36:16.825770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:36:18.502804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:36:18.502897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 04:36:18.502937 | orchestrator | 2026-03-26 04:36:18.502950 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-26 04:36:18.502962 | orchestrator | Thursday 26 March 2026 04:36:17 +0000 (0:00:04.923) 0:04:44.209 ******** 2026-03-26 04:36:18.502975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:36:18.503002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627437 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:21.627450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:36:21.627460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 
04:36:21.627490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627526 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:21.627539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:36:21.627549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-26 04:36:21.627582 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:21.627592 | orchestrator | 2026-03-26 04:36:21.627602 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-03-26 04:36:21.627613 | orchestrator | Thursday 26 March 2026 04:36:19 +0000 (0:00:01.716) 0:04:45.926 ******** 2026-03-26 04:36:21.627622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:21.627635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:21.627645 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:21.627654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:21.627669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}})  2026-03-26 04:36:36.968179 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:36.968298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:36.968319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:36:36.968334 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:36.968345 | orchestrator | 2026-03-26 04:36:36.968357 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-26 04:36:36.968369 | orchestrator | Thursday 26 March 2026 04:36:21 +0000 (0:00:02.003) 0:04:47.929 ******** 2026-03-26 04:36:36.968380 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:36:36.968391 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:36:36.968402 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:36:36.968412 | orchestrator | 2026-03-26 04:36:36.968439 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-26 04:36:36.968450 | orchestrator | Thursday 26 March 2026 04:36:23 +0000 (0:00:02.254) 0:04:50.183 ******** 2026-03-26 04:36:36.968461 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:36:36.968472 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:36:36.968482 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:36:36.968492 | orchestrator | 2026-03-26 04:36:36.968503 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-26 04:36:36.968514 | orchestrator | Thursday 26 March 2026 04:36:26 +0000 (0:00:02.974) 0:04:53.158 ******** 2026-03-26 04:36:36.968525 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:36:36.968535 | orchestrator | 2026-03-26 04:36:36.968546 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-26 04:36:36.968557 | orchestrator | Thursday 26 March 2026 04:36:29 +0000 (0:00:02.622) 0:04:55.780 ******** 2026-03-26 04:36:36.968567 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 04:36:36.968578 | orchestrator | 2026-03-26 04:36:36.968610 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-26 04:36:36.968622 | orchestrator | Thursday 26 March 2026 04:36:33 +0000 (0:00:03.966) 0:04:59.747 ******** 2026-03-26 04:36:36.968638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:36:36.968672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 04:36:36.968685 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:36.968728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:36:36.968750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 04:36:36.968762 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
04:36:36.968783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:36:40.696601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 04:36:40.696713 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:40.696729 | orchestrator | 2026-03-26 04:36:40.696743 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-26 04:36:40.696754 | orchestrator | Thursday 26 March 2026 04:36:36 +0000 (0:00:03.513) 0:05:03.260 ******** 2026-03-26 04:36:40.696792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:36:40.696807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 04:36:40.696819 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:40.696858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:36:40.696880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 04:36:40.696892 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:40.696904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:36:40.696924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-26 04:36:56.947759 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:56.947905 | orchestrator | 2026-03-26 04:36:56.947929 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-26 04:36:56.947949 | orchestrator | Thursday 26 March 2026 04:36:40 +0000 (0:00:03.738) 0:05:06.999 ******** 2026-03-26 04:36:56.947991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 04:36:56.948046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 04:36:56.948113 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:56.948134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 04:36:56.948153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 04:36:56.948170 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:56.948188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 04:36:56.948206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-26 04:36:56.948225 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:56.948244 | orchestrator | 2026-03-26 04:36:56.948262 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-26 04:36:56.948280 | orchestrator | Thursday 26 March 2026 04:36:44 +0000 (0:00:03.918) 0:05:10.918 ******** 2026-03-26 04:36:56.948311 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:36:56.948352 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:36:56.948374 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:36:56.948393 | orchestrator | 2026-03-26 04:36:56.948413 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-26 04:36:56.948431 | orchestrator | Thursday 26 March 2026 04:36:47 +0000 (0:00:02.974) 0:05:13.893 ******** 2026-03-26 04:36:56.948450 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:56.948470 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:56.948491 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 04:36:56.948509 | orchestrator | 2026-03-26 04:36:56.948537 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-26 04:36:56.948557 | orchestrator | Thursday 26 March 2026 04:36:50 +0000 (0:00:02.727) 0:05:16.621 ******** 2026-03-26 04:36:56.948576 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:36:56.948597 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:36:56.948615 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:36:56.948631 | orchestrator | 2026-03-26 04:36:56.948648 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-26 04:36:56.948668 | orchestrator | Thursday 26 March 2026 04:36:51 +0000 (0:00:01.402) 0:05:18.023 ******** 2026-03-26 04:36:56.948685 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:36:56.948702 | orchestrator | 2026-03-26 04:36:56.948719 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-26 04:36:56.948736 | orchestrator | Thursday 26 March 2026 04:36:53 +0000 (0:00:02.176) 0:05:20.199 ******** 2026-03-26 04:36:56.948756 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-03-26 04:36:56.948776 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-26 04:36:56.948795 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-26 04:36:56.948823 | orchestrator | 2026-03-26 04:36:56.948840 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-26 04:36:56.948858 | orchestrator | Thursday 26 March 2026 04:36:56 +0000 (0:00:02.535) 0:05:22.735 ******** 2026-03-26 04:36:56.948888 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 04:37:11.845681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 04:37:11.845801 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:11.845819 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:11.845833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 04:37:11.845845 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:11.845856 | orchestrator | 2026-03-26 04:37:11.845868 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-26 04:37:11.845880 | orchestrator | Thursday 26 March 2026 04:36:58 +0000 (0:00:01.745) 0:05:24.480 ******** 2026-03-26 04:37:11.845892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-26 04:37:11.845905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-26 04:37:11.845916 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:11.845927 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:11.845937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  
2026-03-26 04:37:11.845970 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:11.845981 | orchestrator | 2026-03-26 04:37:11.845992 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-26 04:37:11.846003 | orchestrator | Thursday 26 March 2026 04:36:59 +0000 (0:00:01.420) 0:05:25.901 ******** 2026-03-26 04:37:11.846014 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:11.846100 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:11.846111 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:11.846121 | orchestrator | 2026-03-26 04:37:11.846132 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-26 04:37:11.846152 | orchestrator | Thursday 26 March 2026 04:37:01 +0000 (0:00:01.454) 0:05:27.355 ******** 2026-03-26 04:37:11.846163 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:11.846173 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:11.846184 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:11.846194 | orchestrator | 2026-03-26 04:37:11.846205 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-26 04:37:11.846216 | orchestrator | Thursday 26 March 2026 04:37:03 +0000 (0:00:02.362) 0:05:29.717 ******** 2026-03-26 04:37:11.846226 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:11.846237 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:11.846247 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:11.846258 | orchestrator | 2026-03-26 04:37:11.846269 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-26 04:37:11.846279 | orchestrator | Thursday 26 March 2026 04:37:05 +0000 (0:00:01.707) 0:05:31.425 ******** 2026-03-26 04:37:11.846290 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:37:11.846301 | 
orchestrator | 2026-03-26 04:37:11.846312 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-26 04:37:11.846323 | orchestrator | Thursday 26 March 2026 04:37:07 +0000 (0:00:02.122) 0:05:33.547 ******** 2026-03-26 04:37:11.846363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:11.846380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:11.846394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-26 04:37:11.846416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-26 04:37:11.846443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:11.931999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:11.932110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:11.932149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:11.932162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 04:37:11.932175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:11.932218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:11.932232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-26 04:37:11.932252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-26 04:37:11.932265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:11.932277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-26 04:37:11.932301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:12.287345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:12.287444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:12.287485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:12.287499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:12.287513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 04:37:12.287542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 04:37:12.287572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:12.287586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:12.287606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:12.287617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-26 04:37:12.287630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:12.287646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:12.287665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 04:37:13.615037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:13.615203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:13.615223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:13.615260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-26 04:37:13.615293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-26 04:37:13.615328 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:13.615341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:13.615354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:13.615365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 04:37:13.615382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:13.615407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.700729 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-26 04:37:14.700836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:14.700853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.700870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 04:37:14.700902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:14.700935 | orchestrator | 2026-03-26 04:37:14.700949 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-26 04:37:14.700961 | orchestrator | Thursday 26 March 2026 04:37:13 +0000 (0:00:06.368) 0:05:39.916 ******** 2026-03-26 04:37:14.700993 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:37:14.701007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.701020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 
'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-26 04:37:14.701037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-26 04:37:14.701108 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.812726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:14.812798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:14.812806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 04:37:14.812813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:14.812831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.812863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:37:14.812870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-26 04:37:14.812875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.812880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:14.812888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-26 04:37:14.812896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.812905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-26 04:37:14.969176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 04:37:14.969269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.969284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 
'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:14.969369 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:14.969394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:14.969413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:14.969433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 04:37:14.969470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:14.969491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.969510 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-26 04:37:14.969545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:14.969555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:14.969573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 04:37:16.263972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:16.264128 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:16.264152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:37:16.264205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:16.264220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-03-26 04:37:16.264253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-03-26 04:37:16.264266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:16.264278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:16.264305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:16.264325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-03-26 04:37:16.264346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:16.264376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:31.636177 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-03-26 04:37:31.636278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-03-26 04:37:31.636324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-26 04:37:31.636337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-26 04:37:31.636347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-26 04:37:31.636355 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:31.636365 | orchestrator | 2026-03-26 04:37:31.636373 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-26 04:37:31.636381 | orchestrator | Thursday 26 March 2026 04:37:16 +0000 (0:00:02.651) 0:05:42.568 ******** 2026-03-26 04:37:31.636390 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:37:31.636413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:37:31.636423 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:31.636430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:37:31.636438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:37:31.636454 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:31.636461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:37:31.636469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:37:31.636476 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:31.636483 | orchestrator | 2026-03-26 04:37:31.636491 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users 
config] ************ 2026-03-26 04:37:31.636498 | orchestrator | Thursday 26 March 2026 04:37:19 +0000 (0:00:02.989) 0:05:45.557 ******** 2026-03-26 04:37:31.636505 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:37:31.636513 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:37:31.636520 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:37:31.636527 | orchestrator | 2026-03-26 04:37:31.636535 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-26 04:37:31.636542 | orchestrator | Thursday 26 March 2026 04:37:21 +0000 (0:00:02.255) 0:05:47.813 ******** 2026-03-26 04:37:31.636549 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:37:31.636556 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:37:31.636563 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:37:31.636570 | orchestrator | 2026-03-26 04:37:31.636581 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-26 04:37:31.636588 | orchestrator | Thursday 26 March 2026 04:37:24 +0000 (0:00:02.986) 0:05:50.800 ******** 2026-03-26 04:37:31.636595 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:37:31.636602 | orchestrator | 2026-03-26 04:37:31.636609 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-26 04:37:31.636616 | orchestrator | Thursday 26 March 2026 04:37:26 +0000 (0:00:02.470) 0:05:53.271 ******** 2026-03-26 04:37:31.636624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-26 04:37:31.636639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-26 04:37:48.679893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-26 04:37:48.680007 | orchestrator | 2026-03-26 04:37:48.680069 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-26 04:37:48.680082 | orchestrator | Thursday 26 March 2026 04:37:31 +0000 (0:00:04.663) 0:05:57.934 ******** 2026-03-26 04:37:48.680111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-26 04:37:48.680124 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:48.680138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-26 04:37:48.680150 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:48.680203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-26 04:37:48.680216 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:48.680227 | orchestrator | 2026-03-26 04:37:48.680239 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-26 04:37:48.680249 | orchestrator | Thursday 26 March 2026 04:37:33 +0000 (0:00:01.726) 0:05:59.660 ******** 2026-03-26 04:37:48.680262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:37:48.680276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:37:48.680289 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:48.680300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:37:48.680317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:37:48.680328 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:48.680339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:37:48.680350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:37:48.680362 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:37:48.680372 | orchestrator | 2026-03-26 04:37:48.680383 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-26 04:37:48.680394 | orchestrator | Thursday 26 March 2026 04:37:35 +0000 (0:00:01.907) 0:06:01.568 ******** 2026-03-26 04:37:48.680405 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:37:48.680417 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:37:48.680430 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:37:48.680442 | orchestrator | 2026-03-26 04:37:48.680454 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-26 04:37:48.680466 | orchestrator | Thursday 26 March 2026 04:37:37 +0000 (0:00:02.377) 0:06:03.945 ******** 2026-03-26 04:37:48.680486 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:37:48.680498 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:37:48.680511 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:37:48.680522 | orchestrator | 2026-03-26 04:37:48.680534 | orchestrator | TASK 
[include_role : nova] ***************************************************** 2026-03-26 04:37:48.680547 | orchestrator | Thursday 26 March 2026 04:37:40 +0000 (0:00:02.975) 0:06:06.921 ******** 2026-03-26 04:37:48.680559 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:37:48.680571 | orchestrator | 2026-03-26 04:37:48.680583 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-26 04:37:48.680595 | orchestrator | Thursday 26 March 2026 04:37:42 +0000 (0:00:02.309) 0:06:09.231 ******** 2026-03-26 04:37:48.680616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:50.195479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:50.195603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 
04:37:50.195642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:50.195656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.195687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.195707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:50.195720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:37:50.195739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.195751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.195769 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.888944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.889046 | orchestrator | 2026-03-26 04:37:50.889056 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-26 04:37:50.889064 | orchestrator | Thursday 26 March 2026 04:37:50 +0000 (0:00:07.268) 0:06:16.500 ******** 2026-03-26 04:37:50.889091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:37:50.889112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:37:50.889117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.889133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.889138 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:37:50.889146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:37:50.889150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:37:50.889159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.889163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:37:50.889167 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:37:50.889175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-03-26 04:38:09.527436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:38:09.527610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-26 04:38:09.527647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-26 04:38:09.527671 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:38:09.527687 | orchestrator | 2026-03-26 04:38:09.527699 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-26 04:38:09.527712 | orchestrator | Thursday 26 March 2026 04:37:52 +0000 (0:00:01.951) 0:06:18.451 ******** 2026-03-26 04:38:09.527723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-03-26 04:38:09.527773 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:38:09.527785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527867 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:38:09.527878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:38:09.527922 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:38:09.527933 | orchestrator | 2026-03-26 04:38:09.527944 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-26 04:38:09.527977 | orchestrator | Thursday 26 March 2026 04:37:54 +0000 (0:00:02.577) 0:06:21.029 ******** 2026-03-26 04:38:09.528026 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:38:09.528041 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:38:09.528053 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:38:09.528066 | orchestrator | 2026-03-26 04:38:09.528079 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-26 04:38:09.528091 | orchestrator | Thursday 26 March 2026 04:37:57 +0000 (0:00:02.333) 0:06:23.362 ******** 2026-03-26 04:38:09.528104 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:38:09.528116 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:38:09.528128 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:38:09.528141 | orchestrator | 2026-03-26 04:38:09.528154 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-26 04:38:09.528166 | orchestrator | Thursday 26 March 2026 04:38:00 +0000 (0:00:03.004) 0:06:26.367 ******** 2026-03-26 04:38:09.528178 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:38:09.528191 | orchestrator | 2026-03-26 
04:38:09.528203 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-26 04:38:09.528215 | orchestrator | Thursday 26 March 2026 04:38:02 +0000 (0:00:02.853) 0:06:29.220 ******** 2026-03-26 04:38:09.528228 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-26 04:38:09.528242 | orchestrator | 2026-03-26 04:38:09.528253 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-26 04:38:09.528267 | orchestrator | Thursday 26 March 2026 04:38:04 +0000 (0:00:01.765) 0:06:30.986 ******** 2026-03-26 04:38:09.528281 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-26 04:38:09.528303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-26 04:38:09.528324 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-26 04:38:29.034294 | orchestrator | 2026-03-26 04:38:29.034431 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-26 04:38:29.034447 | orchestrator | Thursday 26 March 2026 04:38:09 +0000 (0:00:04.834) 0:06:35.820 ******** 2026-03-26 04:38:29.034461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.034474 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:38:29.034486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.034496 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:38:29.034506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.034516 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:38:29.034526 | orchestrator | 2026-03-26 04:38:29.034536 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-26 04:38:29.034545 | orchestrator | Thursday 26 March 2026 04:38:11 +0000 (0:00:02.439) 0:06:38.260 ******** 2026-03-26 04:38:29.034556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 04:38:29.034569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 04:38:29.034605 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:38:29.034674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 04:38:29.034686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 04:38:29.034695 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 04:38:29.034705 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:38:29.034715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-26 04:38:29.034725 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:38:29.034734 | orchestrator | 2026-03-26 04:38:29.034744 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-26 04:38:29.034753 | orchestrator | Thursday 26 March 2026 04:38:15 +0000 (0:00:03.286) 0:06:41.547 ******** 2026-03-26 04:38:29.034763 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:38:29.034773 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:38:29.034783 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:38:29.034792 | orchestrator | 2026-03-26 04:38:29.034802 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-26 04:38:29.034811 | orchestrator | Thursday 26 March 2026 04:38:19 +0000 (0:00:03.873) 0:06:45.420 ******** 2026-03-26 04:38:29.034821 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:38:29.034830 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:38:29.034859 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:38:29.034870 | orchestrator | 2026-03-26 04:38:29.034885 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-26 04:38:29.034895 | orchestrator | Thursday 26 March 2026 04:38:22 +0000 (0:00:03.493) 0:06:48.914 ******** 2026-03-26 04:38:29.034906 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-26 04:38:29.034918 | orchestrator | 2026-03-26 04:38:29.034927 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-26 04:38:29.034937 | orchestrator | Thursday 26 March 2026 04:38:24 +0000 (0:00:01.563) 0:06:50.478 ******** 2026-03-26 04:38:29.034947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.034958 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:38:29.034968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.035010 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:38:29.035029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.035040 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:38:29.035049 | orchestrator | 2026-03-26 04:38:29.035059 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-26 04:38:29.035069 | orchestrator | Thursday 26 March 2026 04:38:26 +0000 (0:00:02.273) 0:06:52.752 ******** 2026-03-26 04:38:29.035079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.035088 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:38:29.035098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:38:29.035108 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:38:29.035125 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-26 04:39:03.034910 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:03.035051 | orchestrator | 2026-03-26 04:39:03.035084 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-26 04:39:03.035097 | orchestrator | Thursday 26 March 2026 04:38:29 +0000 (0:00:02.574) 0:06:55.326 ******** 2026-03-26 04:39:03.035110 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:03.035121 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:03.035131 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:03.035142 | orchestrator | 2026-03-26 04:39:03.035153 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-26 04:39:03.035164 | orchestrator | Thursday 26 March 2026 04:38:31 +0000 (0:00:02.312) 0:06:57.639 ******** 2026-03-26 04:39:03.035175 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:39:03.035186 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:39:03.035197 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:39:03.035208 | orchestrator | 2026-03-26 04:39:03.035219 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-26 04:39:03.035229 | orchestrator | Thursday 26 March 2026 04:38:35 +0000 (0:00:03.768) 0:07:01.407 ******** 2026-03-26 04:39:03.035240 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:39:03.035251 | orchestrator | ok: [testbed-node-1] 2026-03-26 
04:39:03.035283 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:39:03.035294 | orchestrator | 2026-03-26 04:39:03.035305 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-26 04:39:03.035316 | orchestrator | Thursday 26 March 2026 04:38:39 +0000 (0:00:03.955) 0:07:05.363 ******** 2026-03-26 04:39:03.035326 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-26 04:39:03.035339 | orchestrator | 2026-03-26 04:39:03.035350 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-26 04:39:03.035360 | orchestrator | Thursday 26 March 2026 04:38:41 +0000 (0:00:02.298) 0:07:07.662 ******** 2026-03-26 04:39:03.035373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 04:39:03.035387 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:03.035398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-03-26 04:39:03.035410 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:03.035421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 04:39:03.035432 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:03.035443 | orchestrator | 2026-03-26 04:39:03.035456 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-26 04:39:03.035474 | orchestrator | Thursday 26 March 2026 04:38:43 +0000 (0:00:02.421) 0:07:10.083 ******** 2026-03-26 04:39:03.035488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 04:39:03.035502 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:03.035540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 04:39:03.035562 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:03.035575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-26 04:39:03.035588 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:03.035601 | orchestrator | 2026-03-26 04:39:03.035614 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-26 04:39:03.035627 | orchestrator | Thursday 26 March 2026 04:38:46 +0000 (0:00:02.444) 0:07:12.527 ******** 2026-03-26 04:39:03.035639 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:03.035652 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:03.035665 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:03.035677 | orchestrator | 2026-03-26 04:39:03.035690 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-26 04:39:03.035702 | orchestrator | Thursday 26 March 2026 04:38:48 +0000 (0:00:02.499) 0:07:15.027 ******** 2026-03-26 04:39:03.035715 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:39:03.035728 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:39:03.035741 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:39:03.035753 | orchestrator 
| 2026-03-26 04:39:03.035766 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-26 04:39:03.035779 | orchestrator | Thursday 26 March 2026 04:38:52 +0000 (0:00:03.669) 0:07:18.697 ******** 2026-03-26 04:39:03.035791 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:39:03.035804 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:39:03.035814 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:39:03.035825 | orchestrator | 2026-03-26 04:39:03.035836 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-26 04:39:03.035847 | orchestrator | Thursday 26 March 2026 04:38:56 +0000 (0:00:04.374) 0:07:23.071 ******** 2026-03-26 04:39:03.035857 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:39:03.035868 | orchestrator | 2026-03-26 04:39:03.035879 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-26 04:39:03.035890 | orchestrator | Thursday 26 March 2026 04:38:59 +0000 (0:00:02.593) 0:07:25.664 ******** 2026-03-26 04:39:03.035902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 04:39:03.035916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 04:39:03.035946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 04:39:04.267495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 04:39:04.267623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:39:04.267654 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 04:39:04.267679 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-26 04:39:04.267730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 04:39:04.267795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 04:39:04.267817 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 04:39:04.267837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 04:39:04.267855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 04:39:04.267874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:39:04.267893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 04:39:04.267923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:39:04.267943 | orchestrator | 2026-03-26 04:39:04.268016 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-26 04:39:05.248908 | orchestrator | Thursday 26 March 2026 04:39:04 +0000 (0:00:04.908) 0:07:30.572 ******** 2026-03-26 04:39:05.249153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 04:39:05.249183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 04:39:05.249197 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 04:39:05.249209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 04:39:05.249242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 
04:39:05.249254 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:05.249293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 04:39:05.249306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 04:39:05.249317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 04:39:05.249328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 04:39:05.249339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:39:05.249357 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:05.249368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-26 04:39:05.249391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-26 04:39:23.980912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-26 04:39:23.981069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-26 04:39:23.981087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-26 04:39:23.981125 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:23.981140 | orchestrator | 2026-03-26 04:39:23.981152 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-26 04:39:23.981165 | orchestrator | Thursday 26 March 2026 04:39:06 +0000 (0:00:02.124) 0:07:32.697 ******** 2026-03-26 04:39:23.981177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}})  2026-03-26 04:39:23.981189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 04:39:23.981202 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:23.981214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 04:39:23.981225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 04:39:23.981236 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:23.981247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 04:39:23.981258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-26 04:39:23.981285 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:23.981296 | orchestrator | 2026-03-26 04:39:23.981307 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-26 04:39:23.981318 | orchestrator | Thursday 26 March 2026 04:39:08 +0000 (0:00:02.116) 0:07:34.813 ******** 2026-03-26 04:39:23.981329 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:39:23.981341 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:39:23.981352 | orchestrator | ok: 
[testbed-node-2] 2026-03-26 04:39:23.981363 | orchestrator | 2026-03-26 04:39:23.981374 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-26 04:39:23.981385 | orchestrator | Thursday 26 March 2026 04:39:10 +0000 (0:00:02.256) 0:07:37.070 ******** 2026-03-26 04:39:23.981395 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:39:23.981406 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:39:23.981448 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:39:23.981462 | orchestrator | 2026-03-26 04:39:23.981475 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-26 04:39:23.981499 | orchestrator | Thursday 26 March 2026 04:39:13 +0000 (0:00:03.090) 0:07:40.160 ******** 2026-03-26 04:39:23.981511 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:39:23.981524 | orchestrator | 2026-03-26 04:39:23.981536 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-26 04:39:23.981549 | orchestrator | Thursday 26 March 2026 04:39:16 +0000 (0:00:02.574) 0:07:42.734 ******** 2026-03-26 04:39:23.981567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:39:23.981589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:39:23.981603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 
04:39:23.981631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:39:26.303691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:39:26.303822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:39:26.303839 | orchestrator | 2026-03-26 04:39:26.303853 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-26 04:39:26.303866 | orchestrator | Thursday 26 March 2026 04:39:23 +0000 (0:00:07.544) 0:07:50.279 ******** 2026-03-26 
04:39:26.303894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:39:26.303928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:39:26.303997 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:26.304010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:39:26.304023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:39:26.304035 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:26.304052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:39:26.304074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:39:37.380675 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:37.380819 | orchestrator | 2026-03-26 04:39:37.380834 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-26 04:39:37.380847 | orchestrator | Thursday 26 March 2026 04:39:26 +0000 (0:00:02.320) 0:07:52.600 ******** 2026-03-26 04:39:37.380859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:37.380875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-26 04:39:37.380889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-26 04:39:37.380901 | orchestrator | skipping: 
[testbed-node-0] 2026-03-26 04:39:37.380911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:37.380921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-26 04:39:37.380996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-26 04:39:37.381007 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:37.381017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:37.381027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-03-26 04:39:37.381056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  
2026-03-26 04:39:37.381067 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:37.381076 | orchestrator | 2026-03-26 04:39:37.381086 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-03-26 04:39:37.381096 | orchestrator | Thursday 26 March 2026 04:39:28 +0000 (0:00:01.845) 0:07:54.446 ******** 2026-03-26 04:39:37.381135 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:37.381145 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:37.381155 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:37.381164 | orchestrator | 2026-03-26 04:39:37.381175 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-26 04:39:37.381186 | orchestrator | Thursday 26 March 2026 04:39:29 +0000 (0:00:01.483) 0:07:55.929 ******** 2026-03-26 04:39:37.381197 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:37.381208 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:37.381219 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:37.381231 | orchestrator | 2026-03-26 04:39:37.381242 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-26 04:39:37.381253 | orchestrator | Thursday 26 March 2026 04:39:31 +0000 (0:00:02.323) 0:07:58.252 ******** 2026-03-26 04:39:37.381264 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:39:37.381276 | orchestrator | 2026-03-26 04:39:37.381287 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-26 04:39:37.381297 | orchestrator | Thursday 26 March 2026 04:39:34 +0000 (0:00:02.722) 0:08:00.974 ******** 2026-03-26 04:39:37.381332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-26 04:39:37.381349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 04:39:37.381361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:37.381374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:37.381401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 04:39:37.381422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-26 04:39:39.450536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 04:39:39.450694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:39.450712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:39.450724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 04:39:39.450806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-03-26 04:39:39.450850 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 04:39:39.450884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:39.450897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:39.450908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 04:39:39.450920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:39:39.450977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-26 04:39:39.450991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:39.451012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.505463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 04:39:41.505564 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:39:41.505610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['option httpchk', 'timeout server 45s']}}}})  2026-03-26 04:39:41.505617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:39:41.505635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.505641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.505645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-26 04:39:41.505654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 04:39:41.505661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.505665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.505670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 04:39:41.505674 | orchestrator | 2026-03-26 04:39:41.505680 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-26 04:39:41.505685 | orchestrator | Thursday 26 March 2026 04:39:40 +0000 (0:00:06.034) 0:08:07.009 ******** 2026-03-26 04:39:41.505696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-26 04:39:41.893193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 04:39:41.893362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.893399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.893412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 04:39:41.893426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:39:41.893461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-26 04:39:41.893475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.893496 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.893513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 04:39:41.893526 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:41.893540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-26 04:39:41.893553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 04:39:41.893564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:41.893585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:42.097416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 04:39:42.097569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:39:42.097590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-26 04:39:42.097605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:42.097618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:42.097648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 04:39:42.097694 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:42.097714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-03-26 04:39:42.097727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-26 04:39:42.097740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:42.097751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:42.097763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-26 04:39:42.097783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:39:55.609964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-03-26 04:39:55.610191 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:55.610218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:39:55.610237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-26 04:39:55.610257 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:55.610276 | orchestrator | 2026-03-26 04:39:55.610296 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-26 04:39:55.610318 | orchestrator | Thursday 26 March 2026 04:39:43 +0000 (0:00:02.577) 0:08:09.587 ******** 2026-03-26 04:39:55.610338 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-26 04:39:55.610394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-26 04:39:55.610419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:55.610467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:55.610488 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:55.610509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-26 04:39:55.610540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-26 04:39:55.610561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:55.610580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:55.610599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-26 04:39:55.610617 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:55.610638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-03-26 04:39:55.610659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:55.610690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-03-26 04:39:55.610711 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:55.610731 | orchestrator | 2026-03-26 04:39:55.610752 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-26 04:39:55.610771 | orchestrator | Thursday 26 March 2026 04:39:45 +0000 (0:00:01.831) 0:08:11.419 ******** 2026-03-26 04:39:55.610789 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:55.610806 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:55.610823 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:39:55.610842 | orchestrator | 2026-03-26 04:39:55.610857 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-03-26 04:39:55.610874 | orchestrator | Thursday 26 March 2026 04:39:47 +0000 (0:00:01.975) 0:08:13.394 ******** 2026-03-26 04:39:55.610891 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:39:55.610908 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:39:55.610956 | orchestrator | skipping: 
[testbed-node-2] 2026-03-26 04:39:55.610974 | orchestrator | 2026-03-26 04:39:55.610990 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-03-26 04:39:55.611007 | orchestrator | Thursday 26 March 2026 04:39:49 +0000 (0:00:02.205) 0:08:15.599 ******** 2026-03-26 04:39:55.611023 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:39:55.611039 | orchestrator | 2026-03-26 04:39:55.611055 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-03-26 04:39:55.611072 | orchestrator | Thursday 26 March 2026 04:39:51 +0000 (0:00:02.350) 0:08:17.950 ******** 2026-03-26 04:39:55.611119 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 04:40:13.080896 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 04:40:13.081112 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 04:40:13.081145 | orchestrator | 2026-03-26 04:40:13.081170 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-03-26 04:40:13.081192 | 
orchestrator | Thursday 26 March 2026 04:39:55 +0000 (0:00:03.955) 0:08:21.905 ******** 2026-03-26 04:40:13.081214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:40:13.081281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:40:13.081306 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:40:13.081327 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:40:13.081348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:40:13.081382 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:40:13.081403 | orchestrator | 2026-03-26 04:40:13.081425 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-03-26 04:40:13.081446 | orchestrator | Thursday 26 March 2026 04:39:57 +0000 (0:00:01.510) 0:08:23.416 ******** 2026-03-26 04:40:13.081469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-03-26 04:40:13.081492 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:40:13.081512 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-26 04:40:13.081533 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:13.081555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-26 04:40:13.081576 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:13.081596 | orchestrator |
2026-03-26 04:40:13.081617 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-26 04:40:13.081638 | orchestrator | Thursday 26 March 2026 04:39:58 +0000 (0:00:01.471) 0:08:24.888 ********
2026-03-26 04:40:13.081659 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:40:13.081679 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:13.081701 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:13.081720 | orchestrator |
2026-03-26 04:40:13.081737 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-26 04:40:13.081748 | orchestrator | Thursday 26 March 2026 04:40:00 +0000 (0:00:01.974) 0:08:26.862 ********
2026-03-26 04:40:13.081759 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:40:13.081771 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:13.081782 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:13.081793 | orchestrator |
2026-03-26 04:40:13.081803 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-26 04:40:13.081814 | orchestrator | Thursday 26 March 2026 04:40:02 +0000 (0:00:02.516) 0:08:29.147 ********
2026-03-26 04:40:13.081825 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:40:13.081835 | orchestrator |
2026-03-26 04:40:13.081846 | orchestrator |
TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-03-26 04:40:13.081856 | orchestrator | Thursday 26 March 2026 04:40:05 +0000 (0:00:02.516) 0:08:31.663 ******** 2026-03-26 04:40:13.081875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-26 04:40:13.081939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-26 04:40:14.830685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-03-26 04:40:14.830816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-26 04:40:14.830840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-26 04:40:14.830898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-03-26 04:40:14.830960 | orchestrator | 2026-03-26 04:40:14.830974 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-03-26 04:40:14.830987 | orchestrator | Thursday 26 March 2026 04:40:13 +0000 (0:00:07.717) 0:08:39.381 ******** 2026-03-26 04:40:14.831000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-26 04:40:14.831054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-26 04:40:14.831068 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:40:14.831085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-26 04:40:14.831118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-26 04:40:37.496346 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:40:37.496465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-03-26 04:40:37.496490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-03-26 04:40:37.496533 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:40:37.496547 | orchestrator | 2026-03-26 04:40:37.496560 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-03-26 04:40:37.496572 | orchestrator | Thursday 26 March 2026 04:40:14 +0000 (0:00:01.751) 
0:08:41.132 ******** 2026-03-26 04:40:37.496585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-26 04:40:37.496616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-26 04:40:37.496630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:40:37.496643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:40:37.496654 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:40:37.496666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-26 04:40:37.496677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-26 04:40:37.496705 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:40:37.496717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:40:37.496728 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:40:37.496739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-26 04:40:37.496750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-03-26 04:40:37.496762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:40:37.496773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-03-26 04:40:37.496794 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:40:37.496805 | orchestrator | 
2026-03-26 04:40:37.496816 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-26 04:40:37.496827 | orchestrator | Thursday 26 March 2026 04:40:17 +0000 (0:00:02.189) 0:08:43.322 ********
2026-03-26 04:40:37.496838 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:40:37.496850 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:40:37.496861 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:40:37.496871 | orchestrator |
2026-03-26 04:40:37.496882 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-26 04:40:37.496921 | orchestrator | Thursday 26 March 2026 04:40:19 +0000 (0:00:02.377) 0:08:45.700 ********
2026-03-26 04:40:37.496933 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:40:37.496944 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:40:37.496956 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:40:37.496967 | orchestrator |
2026-03-26 04:40:37.496979 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-26 04:40:37.496991 | orchestrator | Thursday 26 March 2026 04:40:22 +0000 (0:00:03.317) 0:08:49.017 ********
2026-03-26 04:40:37.497003 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:40:37.497014 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:37.497025 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:37.497037 | orchestrator |
2026-03-26 04:40:37.497049 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-26 04:40:37.497061 | orchestrator | Thursday 26 March 2026 04:40:24 +0000 (0:00:01.504) 0:08:50.522 ********
2026-03-26 04:40:37.497080 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:40:37.497092 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:37.497103 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:37.497114 | orchestrator |
2026-03-26 04:40:37.497124 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-26 04:40:37.497135 | orchestrator | Thursday 26 March 2026 04:40:25 +0000 (0:00:01.460) 0:08:51.983 ********
2026-03-26 04:40:37.497145 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:40:37.497156 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:37.497167 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:37.497177 | orchestrator |
2026-03-26 04:40:37.497187 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-26 04:40:37.497197 | orchestrator | Thursday 26 March 2026 04:40:27 +0000 (0:00:01.827) 0:08:53.811 ********
2026-03-26 04:40:37.497208 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:40:37.497218 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:37.497230 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:37.497242 | orchestrator |
2026-03-26 04:40:37.497253 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-26 04:40:37.497264 | orchestrator | Thursday 26 March 2026 04:40:28 +0000 (0:00:01.355) 0:08:55.166 ********
2026-03-26 04:40:37.497275 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:40:37.497286 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:40:37.497298 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:40:37.497310 | orchestrator |
2026-03-26 04:40:37.497321 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-03-26 04:40:37.497331 | orchestrator | Thursday 26 March 2026 04:40:30 +0000 (0:00:01.430) 0:08:56.597 ********
2026-03-26 04:40:37.497342 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:40:37.497354 | orchestrator |
2026-03-26 04:40:37.497363 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-03-26 04:40:37.497374 | orchestrator | Thursday 26 March 2026 04:40:33 +0000 (0:00:02.818) 0:08:59.416 ******** 2026-03-26 04:40:37.497401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-26 04:40:41.881951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-26 04:40:41.882107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-26 04:40:41.882121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:40:41.882144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:40:41.882152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-26 04:40:41.882161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:40:41.882207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-26 04:40:41.882215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-03-26 04:40:41.882223 | orchestrator |
2026-03-26 04:40:41.882232 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-03-26 04:40:41.882241 | orchestrator | Thursday 26 March 2026 04:40:37 +0000 (0:00:04.383) 0:09:03.799 ********
2026-03-26 04:40:41.882249 | orchestrator | changed: [testbed-node-0] => {
2026-03-26 04:40:41.882258 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:40:41.882265 | orchestrator | }
2026-03-26 04:40:41.882272 | orchestrator | changed: [testbed-node-1] => {
2026-03-26 04:40:41.882279 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:40:41.882286 | orchestrator | }
2026-03-26 04:40:41.882293 | orchestrator | changed: [testbed-node-2] => {
2026-03-26 04:40:41.882300 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:40:41.882307 | orchestrator | }
2026-03-26 04:40:41.882314 | orchestrator |
2026-03-26 04:40:41.882322 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-26 04:40:41.882329 | orchestrator | Thursday 26 March 2026 04:40:38 +0000 (0:00:01.442) 0:09:05.242 ********
2026-03-26 04:40:41.882336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-26 04:40:41.882349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:40:41.882357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-26 04:40:41.882370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:40:41.882377 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:40:41.882390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:42:42.592458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:42:42.592578 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:42:42.592596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-26 04:42:42.592626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-26 04:42:42.592638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-26 04:42:42.592671 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:42:42.592683 | orchestrator | 2026-03-26 04:42:42.592695 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-26 04:42:42.592707 | orchestrator | Thursday 26 March 2026 04:40:41 +0000 (0:00:02.937) 0:09:08.180 ******** 2026-03-26 04:42:42.592718 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:42:42.592729 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:42:42.592740 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:42:42.592751 | orchestrator | 2026-03-26 04:42:42.592762 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-26 04:42:42.592773 | orchestrator | Thursday 26 March 2026 04:40:43 +0000 (0:00:01.803) 0:09:09.984 
********
2026-03-26 04:42:42.592783 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:42:42.592794 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:42:42.592804 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:42:42.592873 | orchestrator |
2026-03-26 04:42:42.592885 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-26 04:42:42.592896 | orchestrator | Thursday 26 March 2026 04:40:45 +0000 (0:00:01.398) 0:09:11.382 ********
2026-03-26 04:42:42.592907 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:42:42.592918 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:42:42.592928 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:42:42.592939 | orchestrator |
2026-03-26 04:42:42.592950 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-26 04:42:42.592961 | orchestrator | Thursday 26 March 2026 04:40:52 +0000 (0:00:07.025) 0:09:18.407 ********
2026-03-26 04:42:42.592971 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:42:42.592982 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:42:42.592994 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:42:42.593006 | orchestrator |
2026-03-26 04:42:42.593032 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-26 04:42:42.593056 | orchestrator | Thursday 26 March 2026 04:40:59 +0000 (0:00:07.440) 0:09:25.848 ********
2026-03-26 04:42:42.593068 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:42:42.593080 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:42:42.593091 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:42:42.593102 | orchestrator |
2026-03-26 04:42:42.593112 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-26 04:42:42.593123 | orchestrator | Thursday 26 March 2026 04:41:06 +0000 (0:00:07.139) 0:09:32.988 ********
2026-03-26 04:42:42.593134 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:42:42.593145 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:42:42.593155 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:42:42.593166 | orchestrator |
2026-03-26 04:42:42.593194 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-26 04:42:42.593207 | orchestrator | Thursday 26 March 2026 04:41:14 +0000 (0:00:07.613) 0:09:40.602 ********
2026-03-26 04:42:42.593218 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:42:42.593229 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:42:42.593240 | orchestrator |
2026-03-26 04:42:42.593250 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-26 04:42:42.593261 | orchestrator | Thursday 26 March 2026 04:41:18 +0000 (0:00:03.825) 0:09:44.428 ********
2026-03-26 04:42:42.593272 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:42:42.593283 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:42:42.593293 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:42:42.593304 | orchestrator |
2026-03-26 04:42:42.593315 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-26 04:42:42.593325 | orchestrator | Thursday 26 March 2026 04:41:31 +0000 (0:00:13.028) 0:09:57.456 ********
2026-03-26 04:42:42.593345 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:42:42.593356 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:42:42.593367 | orchestrator |
2026-03-26 04:42:42.593378 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-26 04:42:42.593388 | orchestrator | Thursday 26 March 2026 04:41:35 +0000 (0:00:04.614) 0:10:02.070 ********
2026-03-26 04:42:42.593399 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:42:42.593410 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:42:42.593421 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:42:42.593431 | orchestrator |
2026-03-26 04:42:42.593442 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-26 04:42:42.593453 | orchestrator | Thursday 26 March 2026 04:41:43 +0000 (0:00:07.753) 0:10:09.824 ********
2026-03-26 04:42:42.593464 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:42:42.593475 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:42:42.593485 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:42:42.593496 | orchestrator |
2026-03-26 04:42:42.593507 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-26 04:42:42.593517 | orchestrator | Thursday 26 March 2026 04:41:50 +0000 (0:00:06.838) 0:10:16.663 ********
2026-03-26 04:42:42.593528 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:42:42.593539 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:42:42.593549 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:42:42.593560 | orchestrator |
2026-03-26 04:42:42.593576 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-26 04:42:42.593587 | orchestrator | Thursday 26 March 2026 04:41:57 +0000 (0:00:06.796) 0:10:23.460 ********
2026-03-26 04:42:42.593598 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:42:42.593609 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:42:42.593620 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:42:42.593630 | orchestrator |
2026-03-26 04:42:42.593641 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-26 04:42:42.593652 | orchestrator | Thursday 26 March 2026 04:42:04 +0000 (0:00:06.957) 0:10:30.417 ********
2026-03-26 04:42:42.593662 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:42:42.593673 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:42:42.593684 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:42:42.593694 | orchestrator |
2026-03-26 04:42:42.593705 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] **************
2026-03-26 04:42:42.593716 | orchestrator | Thursday 26 March 2026 04:42:11 +0000 (0:00:07.033) 0:10:37.451 ********
2026-03-26 04:42:42.593727 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:42:42.593737 | orchestrator |
2026-03-26 04:42:42.593748 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-26 04:42:42.593758 | orchestrator | Thursday 26 March 2026 04:42:14 +0000 (0:00:03.645) 0:10:41.097 ********
2026-03-26 04:42:42.593769 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:42:42.593780 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:42:42.593791 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:42:42.593801 | orchestrator |
2026-03-26 04:42:42.593833 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] *************
2026-03-26 04:42:42.593844 | orchestrator | Thursday 26 March 2026 04:42:27 +0000 (0:00:12.640) 0:10:53.737 ********
2026-03-26 04:42:42.593855 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:42:42.593866 | orchestrator |
2026-03-26 04:42:42.593877 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-26 04:42:42.593888 | orchestrator | Thursday 26 March 2026 04:42:31 +0000 (0:00:03.624) 0:10:57.362 ********
2026-03-26 04:42:42.593898 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:42:42.593909 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:42:42.593920 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:42:42.593931 | orchestrator |
2026-03-26 04:42:42.593941 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-26 04:42:42.593952 | orchestrator | Thursday 26 March 2026 04:42:37 +0000 (0:00:06.740) 0:11:04.102 ********
2026-03-26 04:42:42.593970 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:42:42.593981 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:42:42.593991 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:42:42.594002 | orchestrator |
2026-03-26 04:42:42.594013 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-26 04:42:42.594082 | orchestrator | Thursday 26 March 2026 04:42:39 +0000 (0:00:01.974) 0:11:06.076 ********
2026-03-26 04:42:42.594093 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:42:42.594112 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:42:42.594123 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:42:42.594134 | orchestrator |
2026-03-26 04:42:42.594144 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:42:42.594156 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-26 04:42:42.594169 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-26 04:42:42.594188 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0
2026-03-26 04:42:43.515143 | orchestrator |
2026-03-26 04:42:43.515243 | orchestrator |
2026-03-26 04:42:43.515258 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:42:43.515273 | orchestrator | Thursday 26 March 2026 04:42:42 +0000 (0:00:02.803) 0:11:08.880 ********
2026-03-26 04:42:43.515284 | orchestrator | ===============================================================================
2026-03-26 04:42:43.515295 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.03s
2026-03-26 04:42:43.515306 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.64s
2026-03-26 04:42:43.515316 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.75s
2026-03-26 04:42:43.515327 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.72s
2026-03-26 04:42:43.515338 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.61s
2026-03-26 04:42:43.515348 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.54s
2026-03-26 04:42:43.515359 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.44s
2026-03-26 04:42:43.515369 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.27s
2026-03-26 04:42:43.515380 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.14s
2026-03-26 04:42:43.515390 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.03s
2026-03-26 04:42:43.515401 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.03s
2026-03-26 04:42:43.515411 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.96s
2026-03-26 04:42:43.515422 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.84s
2026-03-26 04:42:43.515432 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.80s
2026-03-26 04:42:43.515443 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.74s
2026-03-26 04:42:43.515453 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.37s
2026-03-26 04:42:43.515484 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 6.03s
2026-03-26 04:42:43.515495 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.71s
2026-03-26 04:42:43.515506 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.08s
2026-03-26 04:42:43.515516 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.92s
2026-03-26 04:42:43.819951 | orchestrator | + osism apply -a upgrade opensearch
2026-03-26 04:42:45.882905 | orchestrator | 2026-03-26 04:42:45 | INFO  | Task 880f3991-9db2-4938-9bc9-5c310c65b633 (opensearch) was prepared for execution.
2026-03-26 04:42:45.883031 | orchestrator | 2026-03-26 04:42:45 | INFO  | It takes a moment until task 880f3991-9db2-4938-9bc9-5c310c65b633 (opensearch) has been started and output is visible here.
2026-03-26 04:43:04.197227 | orchestrator |
2026-03-26 04:43:04.197340 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 04:43:04.197356 | orchestrator |
2026-03-26 04:43:04.197368 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 04:43:04.197379 | orchestrator | Thursday 26 March 2026 04:42:51 +0000 (0:00:01.527) 0:00:01.527 ********
2026-03-26 04:43:04.197390 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:43:04.197402 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:43:04.197413 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:43:04.197423 | orchestrator |
2026-03-26 04:43:04.197434 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 04:43:04.197446 | orchestrator | Thursday 26 March 2026 04:42:53 +0000 (0:00:01.660) 0:00:03.188 ********
2026-03-26 04:43:04.197457 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-03-26 04:43:04.197468 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-03-26 04:43:04.197478 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-03-26 04:43:04.197489 | orchestrator |
2026-03-26 04:43:04.197499 | orchestrator | PLAY [Apply role 
opensearch] *************************************************** 2026-03-26 04:43:04.197510 | orchestrator | 2026-03-26 04:43:04.197520 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-26 04:43:04.197531 | orchestrator | Thursday 26 March 2026 04:42:55 +0000 (0:00:02.422) 0:00:05.611 ******** 2026-03-26 04:43:04.197542 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:43:04.197553 | orchestrator | 2026-03-26 04:43:04.197563 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-26 04:43:04.197574 | orchestrator | Thursday 26 March 2026 04:42:57 +0000 (0:00:02.344) 0:00:07.955 ******** 2026-03-26 04:43:04.197585 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-26 04:43:04.197595 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-26 04:43:04.197606 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-26 04:43:04.197617 | orchestrator | 2026-03-26 04:43:04.197628 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-26 04:43:04.197638 | orchestrator | Thursday 26 March 2026 04:42:59 +0000 (0:00:02.096) 0:00:10.052 ******** 2026-03-26 04:43:04.197653 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:04.197670 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:04.197739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:04.197755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:04.197771 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:04.197791 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:04.197853 | orchestrator | 2026-03-26 
04:43:04.197867 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-26 04:43:04.197879 | orchestrator | Thursday 26 March 2026 04:43:02 +0000 (0:00:02.431) 0:00:12.483 ******** 2026-03-26 04:43:04.197893 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:43:04.197904 | orchestrator | 2026-03-26 04:43:04.197924 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-26 04:43:09.526672 | orchestrator | Thursday 26 March 2026 04:43:04 +0000 (0:00:01.804) 0:00:14.287 ******** 2026-03-26 04:43:09.526770 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:09.526784 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:09.526792 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:09.526870 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:09.526902 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:09.526911 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:09.526918 | orchestrator | 2026-03-26 04:43:09.526926 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-26 04:43:09.526939 | orchestrator | Thursday 26 March 2026 04:43:07 +0000 (0:00:03.512) 0:00:17.800 ******** 2026-03-26 04:43:09.526945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:43:09.526962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:43:11.355229 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:43:11.355327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:43:11.355345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:43:11.355379 | 
orchestrator | skipping: [testbed-node-1] 2026-03-26 04:43:11.355405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:43:11.355431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:43:11.355444 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:43:11.355454 | orchestrator | 2026-03-26 04:43:11.355465 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-26 04:43:11.355476 | orchestrator | Thursday 26 March 2026 04:43:09 +0000 (0:00:01.824) 0:00:19.625 ******** 2026-03-26 04:43:11.355485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:43:11.355496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:43:11.355513 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:43:11.355528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:43:11.355546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:43:15.238916 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:43:15.239034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-03-26 04:43:15.239081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:43:15.239097 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:43:15.239109 | orchestrator | 2026-03-26 04:43:15.239121 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-26 04:43:15.239132 | orchestrator | Thursday 26 March 2026 04:43:11 +0000 (0:00:01.823) 0:00:21.449 ******** 2026-03-26 04:43:15.239159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:15.239189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:15.239202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:15.239223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:15.239240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:15.239263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-03-26 04:43:28.868942 | orchestrator |
2026-03-26 04:43:28.869111 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-03-26 04:43:28.869137 | orchestrator | Thursday 26 March 2026 04:43:15 +0000 (0:00:03.883) 0:00:25.333 ********
2026-03-26 04:43:28.869153 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:43:28.869173 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:43:28.869191 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:43:28.869209 | orchestrator |
2026-03-26 04:43:28.869226 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-03-26 04:43:28.869245 | orchestrator | Thursday 26 March 2026 04:43:18 +0000 (0:00:03.433) 0:00:28.766 ********
2026-03-26 04:43:28.869262 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:43:28.869280 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:43:28.869290 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:43:28.869300 | orchestrator |
2026-03-26 04:43:28.869309 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-03-26 04:43:28.869319 | orchestrator | Thursday 26 March 2026 04:43:21 +0000 (0:00:03.101) 0:00:31.867 ********
2026-03-26 04:43:28.869332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:28.869361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:28.869371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-03-26 04:43:28.869411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:28.869445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:28.869485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-03-26 04:43:28.869503 | orchestrator | 2026-03-26 
04:43:28.869521 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-03-26 04:43:28.869539 | orchestrator | Thursday 26 March 2026 04:43:25 +0000 (0:00:03.627) 0:00:35.495 ********
2026-03-26 04:43:28.869556 | orchestrator | changed: [testbed-node-0] => {
2026-03-26 04:43:28.869573 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:43:28.869589 | orchestrator | }
2026-03-26 04:43:28.869604 | orchestrator | changed: [testbed-node-1] => {
2026-03-26 04:43:28.869619 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:43:28.869635 | orchestrator | }
2026-03-26 04:43:28.869651 | orchestrator | changed: [testbed-node-2] => {
2026-03-26 04:43:28.869677 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:43:28.869694 | orchestrator | }
2026-03-26 04:43:28.869711 | orchestrator |
2026-03-26 04:43:28.869727 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-26 04:43:28.869744 | orchestrator | Thursday 26 March 2026 04:43:26 +0000 (0:00:01.361) 0:00:36.856 ********
2026-03-26 04:43:28.869766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'],
'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:46:29.926527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:46:29.926649 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:46:29.926685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:46:29.926700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:46:29.926785 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:46:29.926818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-03-26 04:46:29.926831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-03-26 04:46:29.926843 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:46:29.926854 | orchestrator | 2026-03-26 04:46:29.926866 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-03-26 04:46:29.926878 | orchestrator | Thursday 26 March 2026 04:43:28 +0000 (0:00:02.107) 0:00:38.964 ******** 2026-03-26 04:46:29.926889 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:46:29.926900 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:46:29.926910 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:46:29.926921 | orchestrator | 2026-03-26 04:46:29.926938 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-26 04:46:29.926949 | orchestrator | Thursday 26 March 2026 04:43:30 +0000 (0:00:01.662) 0:00:40.627 ******** 2026-03-26 04:46:29.926960 | orchestrator | 2026-03-26 04:46:29.926970 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-26 04:46:29.926981 | orchestrator | Thursday 26 March 2026 04:43:30 +0000 (0:00:00.475) 0:00:41.103 ******** 2026-03-26 04:46:29.926992 | orchestrator | 2026-03-26 04:46:29.927003 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-26 04:46:29.927013 | orchestrator | Thursday 26 March 2026 04:43:31 +0000 (0:00:00.445) 0:00:41.549 ******** 2026-03-26 04:46:29.927036 | orchestrator | 2026-03-26 04:46:29.927048 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-26 04:46:29.927061 | orchestrator | Thursday 26 March 2026 04:43:32 +0000 (0:00:00.853) 0:00:42.402 ******** 2026-03-26 04:46:29.927074 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:46:29.927087 | orchestrator | 2026-03-26 04:46:29.927100 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-26 04:46:29.927112 | orchestrator | Thursday 26 March 2026 04:43:35 +0000 (0:00:03.592) 0:00:45.994 ******** 2026-03-26 04:46:29.927124 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:46:29.927136 | orchestrator | 2026-03-26 
04:46:29.927149 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-26 04:46:29.927161 | orchestrator | Thursday 26 March 2026 04:43:40 +0000 (0:00:04.453) 0:00:50.448 ******** 2026-03-26 04:46:29.927173 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:46:29.927185 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:46:29.927197 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:46:29.927209 | orchestrator | 2026-03-26 04:46:29.927221 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-26 04:46:29.927233 | orchestrator | Thursday 26 March 2026 04:44:49 +0000 (0:01:09.534) 0:01:59.982 ******** 2026-03-26 04:46:29.927245 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:46:29.927257 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:46:29.927270 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:46:29.927281 | orchestrator | 2026-03-26 04:46:29.927294 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-26 04:46:29.927306 | orchestrator | Thursday 26 March 2026 04:46:20 +0000 (0:01:30.495) 0:03:30.478 ******** 2026-03-26 04:46:29.927319 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:46:29.927332 | orchestrator | 2026-03-26 04:46:29.927344 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-26 04:46:29.927356 | orchestrator | Thursday 26 March 2026 04:46:22 +0000 (0:00:01.688) 0:03:32.166 ******** 2026-03-26 04:46:29.927368 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:46:29.927381 | orchestrator | 2026-03-26 04:46:29.927393 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-26 04:46:29.927405 | orchestrator | Thursday 26 March 2026 04:46:25 +0000 (0:00:03.316) 
0:03:35.483 ********
2026-03-26 04:46:29.927418 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:46:29.927430 | orchestrator |
2026-03-26 04:46:29.927442 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-03-26 04:46:29.927453 | orchestrator | Thursday 26 March 2026 04:46:28 +0000 (0:00:03.285) 0:03:38.768 ********
2026-03-26 04:46:29.927464 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:46:29.927475 | orchestrator |
2026-03-26 04:46:29.927498 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-03-26 04:46:29.927516 | orchestrator | Thursday 26 March 2026 04:46:29 +0000 (0:00:01.249) 0:03:40.017 ********
2026-03-26 04:46:32.291502 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:46:32.291605 | orchestrator |
2026-03-26 04:46:32.291621 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:46:32.291635 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:46:32.291648 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 04:46:32.291659 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 04:46:32.291669 | orchestrator |
2026-03-26 04:46:32.291680 | orchestrator |
2026-03-26 04:46:32.291691 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:46:32.291821 | orchestrator | Thursday 26 March 2026 04:46:31 +0000 (0:00:01.996) 0:03:42.014 ********
2026-03-26 04:46:32.291838 | orchestrator | ===============================================================================
2026-03-26 04:46:32.291849 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 90.50s
2026-03-26 04:46:32.291860 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.53s
2026-03-26 04:46:32.291871 | orchestrator | opensearch : Perform a flush -------------------------------------------- 4.45s
2026-03-26 04:46:32.291882 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.88s
2026-03-26 04:46:32.291893 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.63s
2026-03-26 04:46:32.291904 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.59s
2026-03-26 04:46:32.291915 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.51s
2026-03-26 04:46:32.291925 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.43s
2026-03-26 04:46:32.291936 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.32s
2026-03-26 04:46:32.291947 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.29s
2026-03-26 04:46:32.291975 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.10s
2026-03-26 04:46:32.291986 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.43s
2026-03-26 04:46:32.291997 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.42s
2026-03-26 04:46:32.292008 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.34s
2026-03-26 04:46:32.292019 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.11s
2026-03-26 04:46:32.292032 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.10s
2026-03-26 04:46:32.292045 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.00s
2026-03-26 04:46:32.292057 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.82s
2026-03-26 04:46:32.292070 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.82s
2026-03-26 04:46:32.292083 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.80s
2026-03-26 04:46:32.600324 | orchestrator | + osism apply -a upgrade memcached
2026-03-26 04:46:34.746258 | orchestrator | 2026-03-26 04:46:34 | INFO  | Task 08a40978-d999-48b6-a67e-60a08e1bdf47 (memcached) was prepared for execution.
2026-03-26 04:46:34.746383 | orchestrator | 2026-03-26 04:46:34 | INFO  | It takes a moment until task 08a40978-d999-48b6-a67e-60a08e1bdf47 (memcached) has been started and output is visible here.
2026-03-26 04:46:59.207655 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-26 04:46:59.207792 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-26 04:46:59.207812 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-26 04:46:59.207819 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-26 04:46:59.207833 | orchestrator |
2026-03-26 04:46:59.207841 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 04:46:59.207847 | orchestrator |
2026-03-26 04:46:59.207854 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 04:46:59.207861 | orchestrator | Thursday 26 March 2026 04:46:39 +0000 (0:00:01.066) 0:00:01.066 ********
2026-03-26 04:46:59.207868 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:46:59.207875 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:46:59.207883 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:46:59.207910 | orchestrator |
2026-03-26 04:46:59.207917 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2026-03-26 04:46:59.207924 | orchestrator | Thursday 26 March 2026 04:46:40 +0000 (0:00:00.952) 0:00:02.018 ******** 2026-03-26 04:46:59.207930 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-26 04:46:59.207937 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-03-26 04:46:59.207944 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-26 04:46:59.207950 | orchestrator | 2026-03-26 04:46:59.207957 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-26 04:46:59.207964 | orchestrator | 2026-03-26 04:46:59.207970 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-26 04:46:59.207977 | orchestrator | Thursday 26 March 2026 04:46:41 +0000 (0:00:00.878) 0:00:02.897 ******** 2026-03-26 04:46:59.207984 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:46:59.207990 | orchestrator | 2026-03-26 04:46:59.207997 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-26 04:46:59.208004 | orchestrator | Thursday 26 March 2026 04:46:43 +0000 (0:00:01.599) 0:00:04.496 ******** 2026-03-26 04:46:59.208011 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-26 04:46:59.208018 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-26 04:46:59.208024 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-26 04:46:59.208031 | orchestrator | 2026-03-26 04:46:59.208038 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-26 04:46:59.208044 | orchestrator | Thursday 26 March 2026 04:46:44 +0000 (0:00:00.931) 0:00:05.428 ******** 2026-03-26 04:46:59.208051 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-03-26 04:46:59.208058 | 
orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-03-26 04:46:59.208064 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-03-26 04:46:59.208071 | orchestrator | 2026-03-26 04:46:59.208077 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-03-26 04:46:59.208084 | orchestrator | Thursday 26 March 2026 04:46:46 +0000 (0:00:01.886) 0:00:07.315 ******** 2026-03-26 04:46:59.208106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-26 04:46:59.208116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2026-03-26 04:46:59.208137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-26 04:46:59.208150 | orchestrator | 2026-03-26 04:46:59.208157 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-03-26 04:46:59.208164 | orchestrator | Thursday 26 March 2026 04:46:47 +0000 (0:00:01.247) 0:00:08.562 ******** 2026-03-26 04:46:59.208171 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 04:46:59.208178 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:46:59.208184 | orchestrator | } 2026-03-26 04:46:59.208191 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 04:46:59.208198 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:46:59.208206 | orchestrator | } 2026-03-26 04:46:59.208214 | orchestrator | changed: [testbed-node-2] => { 2026-03-26 04:46:59.208221 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:46:59.208229 | orchestrator | } 2026-03-26 04:46:59.208236 | orchestrator | 2026-03-26 04:46:59.208244 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-26 04:46:59.208251 | orchestrator | Thursday 26 March 2026 04:46:47 +0000 (0:00:00.353) 0:00:08.916 
******** 2026-03-26 04:46:59.208259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 04:46:59.208267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 04:46:59.208275 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-26 04:46:59.208283 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-26 04:46:59.208302 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:46:59.208310 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
04:46:59.208318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-26 04:46:59.208331 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:46:59.208339 | orchestrator | 2026-03-26 04:46:59.208347 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-26 04:46:59.208355 | orchestrator | Thursday 26 March 2026 04:46:49 +0000 (0:00:01.342) 0:00:10.259 ******** 2026-03-26 04:46:59.208363 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:46:59.208369 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:46:59.208380 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:46:59.549670 | orchestrator | 2026-03-26 04:46:59.549815 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 04:46:59.549830 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 04:46:59.549845 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 04:46:59.549857 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 04:46:59.549868 | orchestrator | 2026-03-26 04:46:59.549879 | 
orchestrator |
2026-03-26 04:46:59.549891 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:46:59.549903 | orchestrator | Thursday 26 March 2026 04:46:59 +0000 (0:00:10.074) 0:00:20.334 ********
2026-03-26 04:46:59.549914 | orchestrator | ===============================================================================
2026-03-26 04:46:59.549925 | orchestrator | memcached : Restart memcached container -------------------------------- 10.08s
2026-03-26 04:46:59.549937 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.89s
2026-03-26 04:46:59.549949 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.60s
2026-03-26 04:46:59.549962 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.34s
2026-03-26 04:46:59.549974 | orchestrator | service-check-containers : memcached | Check containers ----------------- 1.25s
2026-03-26 04:46:59.549986 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.95s
2026-03-26 04:46:59.549999 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.93s
2026-03-26 04:46:59.550010 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2026-03-26 04:46:59.550089 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 0.35s
2026-03-26 04:46:59.856905 | orchestrator | + osism apply -a upgrade redis
2026-03-26 04:47:01.950970 | orchestrator | 2026-03-26 04:47:01 | INFO  | Task 103a9efa-0088-4a4c-9ba5-5bed8cc516cc (redis) was prepared for execution.
2026-03-26 04:47:01.951090 | orchestrator | 2026-03-26 04:47:01 | INFO  | It takes a moment until task 103a9efa-0088-4a4c-9ba5-5bed8cc516cc (redis) has been started and output is visible here.
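[editor's note] The memcached item echoed by the tasks above carries a Kolla-style service definition with a healthcheck block whose numeric fields are strings. A minimal sketch (not part of the job output; the helper name `healthcheck_args` is assumed) of how such an entry could be mapped to typed healthcheck parameters:

```python
# Service entry copied from the memcached item in this log (trimmed to the
# healthcheck-relevant fields).
memcached_service = {
    "container_name": "memcached",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen memcached 11211"],
        "timeout": "30",
    },
}

def healthcheck_args(service: dict) -> dict:
    """Convert the string-valued healthcheck fields into integers
    (seconds/counts), keeping the test command list as-is. This is a
    plausible reading of the config, not the role's actual code."""
    hc = service["healthcheck"]
    return {
        "test": hc["test"],
        "interval": int(hc["interval"]),
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]),
        "timeout": int(hc["timeout"]),
    }

args = healthcheck_args(memcached_service)
print(args["test"][1])  # healthcheck_listen memcached 11211
```

The same shape applies to the opensearch and redis items above; only the image, probe command (`healthcheck_curl` vs `healthcheck_listen`), and port differ.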
2026-03-26 04:47:19.757353 | orchestrator | 2026-03-26 04:47:19.757465 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-26 04:47:19.757483 | orchestrator | 2026-03-26 04:47:19.757495 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-26 04:47:19.757506 | orchestrator | Thursday 26 March 2026 04:47:07 +0000 (0:00:01.360) 0:00:01.360 ******** 2026-03-26 04:47:19.757518 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:47:19.757530 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:47:19.757540 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:47:19.757551 | orchestrator | 2026-03-26 04:47:19.757589 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-26 04:47:19.757600 | orchestrator | Thursday 26 March 2026 04:47:09 +0000 (0:00:02.088) 0:00:03.449 ******** 2026-03-26 04:47:19.757611 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-26 04:47:19.757622 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-26 04:47:19.757633 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-03-26 04:47:19.757644 | orchestrator | 2026-03-26 04:47:19.757654 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-26 04:47:19.757665 | orchestrator | 2026-03-26 04:47:19.757676 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-26 04:47:19.757687 | orchestrator | Thursday 26 March 2026 04:47:12 +0000 (0:00:02.799) 0:00:06.248 ******** 2026-03-26 04:47:19.757742 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:47:19.757755 | orchestrator | 2026-03-26 04:47:19.757781 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-26 
04:47:19.757792 | orchestrator | Thursday 26 March 2026 04:47:14 +0000 (0:00:01.728) 0:00:07.977 ******** 2026-03-26 04:47:19.757808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.757824 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.757835 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.757848 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.757886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.757921 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.757944 | orchestrator | 2026-03-26 04:47:19.757965 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-26 04:47:19.757990 | orchestrator | Thursday 26 March 2026 04:47:16 +0000 (0:00:02.252) 0:00:10.229 ******** 2026-03-26 04:47:19.758003 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.758075 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.758089 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.758110 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:19.758148 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776093 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776195 | orchestrator | 2026-03-26 04:47:26.776207 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-26 04:47:26.776217 | orchestrator | Thursday 26 March 2026 04:47:19 +0000 (0:00:03.169) 0:00:13.399 ******** 2026-03-26 04:47:26.776227 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776275 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2026-03-26 04:47:26.776285 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776293 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776320 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776345 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776354 | orchestrator | 2026-03-26 04:47:26.776362 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-03-26 04:47:26.776370 | orchestrator | Thursday 26 March 2026 04:47:23 +0000 (0:00:03.891) 0:00:17.291 ******** 2026-03-26 04:47:26.776382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:26.776440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-26 04:47:54.352637 | orchestrator | 2026-03-26 04:47:54.352788 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-03-26 04:47:54.352806 | orchestrator | Thursday 26 March 2026 04:47:26 +0000 (0:00:03.135) 0:00:20.426 ******** 2026-03-26 04:47:54.352819 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 04:47:54.352838 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:47:54.352854 | orchestrator | } 2026-03-26 04:47:54.352871 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 04:47:54.352887 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:47:54.352903 | orchestrator | } 2026-03-26 04:47:54.352940 | orchestrator | changed: 
[testbed-node-2] => { 2026-03-26 04:47:54.352957 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:47:54.352974 | orchestrator | } 2026-03-26 04:47:54.352990 | orchestrator | 2026-03-26 04:47:54.353008 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-26 04:47:54.353024 | orchestrator | Thursday 26 March 2026 04:47:28 +0000 (0:00:01.612) 0:00:22.038 ******** 2026-03-26 04:47:54.353043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-26 04:47:54.353057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-26 04:47:54.353090 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:47:54.353101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-26 04:47:54.353111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-03-26 04:47:54.353121 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:47:54.353131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-03-26 04:47:54.353167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-03-26 04:47:54.353180 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:47:54.353191 | orchestrator |
2026-03-26 04:47:54.353202 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-26 04:47:54.353213 | orchestrator | Thursday 26 March 2026 04:47:30 +0000 (0:00:01.925) 0:00:23.964 ********
2026-03-26 04:47:54.353223 | orchestrator |
2026-03-26 04:47:54.353234 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-26 04:47:54.353245 | orchestrator | Thursday 26 March 2026 04:47:30 +0000 (0:00:00.466) 0:00:24.430 ********
2026-03-26 04:47:54.353256 | orchestrator |
2026-03-26 04:47:54.353267 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-03-26 04:47:54.353278 | orchestrator | Thursday 26 March 2026 04:47:31 +0000 (0:00:00.450) 0:00:24.881 ********
2026-03-26 04:47:54.353288 | orchestrator |
2026-03-26 04:47:54.353305 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-03-26 04:47:54.353321 | orchestrator | Thursday 26 March 2026 04:47:32 +0000 (0:00:00.802) 0:00:25.684 ********
2026-03-26 04:47:54.353338 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:47:54.353353 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:47:54.353371 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:47:54.353398 | orchestrator |
2026-03-26 04:47:54.353415 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-03-26 04:47:54.353432 | orchestrator | Thursday 26 March 2026 04:47:43 +0000 (0:00:10.979) 0:00:36.663 ********
2026-03-26 04:47:54.353448 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:47:54.353466 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:47:54.353483 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:47:54.353500 | orchestrator |
2026-03-26 04:47:54.353514 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:47:54.353526 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 04:47:54.353538 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 04:47:54.353549 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 04:47:54.353559 | orchestrator |
2026-03-26 04:47:54.353568 | orchestrator |
2026-03-26 04:47:54.353577 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:47:54.353587 | orchestrator | Thursday 26 March 2026 04:47:53 +0000 (0:00:10.908) 0:00:47.572 ********
2026-03-26 04:47:54.353597 | orchestrator | ===============================================================================
2026-03-26 04:47:54.353614 | orchestrator | redis : Restart redis container ---------------------------------------- 10.98s
2026-03-26 04:47:54.353629 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.91s
2026-03-26 04:47:54.353645 | orchestrator | redis : Copying over redis config files --------------------------------- 3.89s
2026-03-26 04:47:54.353663 | orchestrator | redis : Copying over default config.json files -------------------------- 3.17s
2026-03-26 04:47:54.353719 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.14s
2026-03-26 04:47:54.353732 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.80s
2026-03-26 04:47:54.353741 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.25s
2026-03-26 04:47:54.353751 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.09s
2026-03-26 04:47:54.353760 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.93s
2026-03-26 04:47:54.353769 | orchestrator | redis : include_tasks --------------------------------------------------- 1.73s
2026-03-26 04:47:54.353779 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.72s
2026-03-26 04:47:54.353788 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.61s
2026-03-26 04:47:54.661602 | orchestrator | + osism apply -a upgrade mariadb
2026-03-26 04:47:56.807469 | orchestrator | 2026-03-26 04:47:56 | INFO  | Task 44634b2e-5b51-4f9a-9bb1-cb1a991037c4 (mariadb) was prepared for execution.
2026-03-26 04:47:56.807580 | orchestrator | 2026-03-26 04:47:56 | INFO  | It takes a moment until task 44634b2e-5b51-4f9a-9bb1-cb1a991037c4 (mariadb) has been started and output is visible here.
2026-03-26 04:48:22.605791 | orchestrator |
2026-03-26 04:48:22.605899 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 04:48:22.605916 | orchestrator |
2026-03-26 04:48:22.605928 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 04:48:22.605939 | orchestrator | Thursday 26 March 2026 04:48:02 +0000 (0:00:01.330) 0:00:01.330 ********
2026-03-26 04:48:22.605951 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:48:22.605962 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:48:22.605973 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:48:22.605984 | orchestrator |
2026-03-26 04:48:22.605995 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 04:48:22.606006 | orchestrator | Thursday 26 March 2026 04:48:04 +0000 (0:00:02.378) 0:00:03.708 ********
2026-03-26 04:48:22.606093 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-26 04:48:22.606107 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-26 04:48:22.606118 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-26 04:48:22.606129 | orchestrator |
2026-03-26 04:48:22.606151 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-26 04:48:22.606162 | orchestrator |
2026-03-26 04:48:22.606187 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-26 04:48:22.606198 | orchestrator | Thursday 26 March 2026 04:48:07 +0000 (0:00:02.678) 0:00:06.387 ********
2026-03-26 04:48:22.606209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 04:48:22.606220 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 04:48:22.606231 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 04:48:22.606241 | orchestrator |
2026-03-26 04:48:22.606252 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-26 04:48:22.606262 | orchestrator | Thursday 26 March 2026 04:48:08 +0000 (0:00:01.508) 0:00:07.895 ******** 2026-03-26 04:48:22.606274 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:48:22.606285 | orchestrator | 2026-03-26 04:48:22.606296 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-26 04:48:22.606307 | orchestrator | Thursday 26 March 2026 04:48:10 +0000 (0:00:01.753) 0:00:09.649 ******** 2026-03-26 04:48:22.606324 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:22.606367 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:22.606390 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:22.606402 | orchestrator | 2026-03-26 04:48:22.606414 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-26 04:48:22.606425 | orchestrator | Thursday 26 March 2026 04:48:14 +0000 (0:00:03.606) 0:00:13.255 ******** 2026-03-26 04:48:22.606435 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:48:22.606447 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:48:22.606458 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:48:22.606469 | orchestrator | 2026-03-26 04:48:22.606480 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-26 04:48:22.606491 | orchestrator | Thursday 26 March 2026 04:48:15 +0000 (0:00:01.606) 0:00:14.862 ******** 2026-03-26 04:48:22.606501 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:48:22.606512 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:48:22.606523 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:48:22.606534 | orchestrator | 2026-03-26 04:48:22.606545 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-26 04:48:22.606570 | orchestrator | Thursday 26 March 2026 04:48:18 +0000 (0:00:02.251) 0:00:17.116 ******** 2026-03-26 04:48:22.606608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:34.883175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:34.883311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:34.883352 | orchestrator | 2026-03-26 04:48:34.883366 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-26 04:48:34.883378 | orchestrator | Thursday 26 March 2026 04:48:22 +0000 (0:00:04.504) 0:00:21.621 ******** 2026-03-26 04:48:34.883390 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:48:34.883402 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:48:34.883413 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:48:34.883424 | orchestrator | 2026-03-26 04:48:34.883436 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-26 04:48:34.883463 | orchestrator | Thursday 26 March 2026 04:48:24 +0000 (0:00:02.049) 0:00:23.671 
******** 2026-03-26 04:48:34.883475 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:48:34.883486 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:48:34.883496 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:48:34.883507 | orchestrator | 2026-03-26 04:48:34.883518 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-26 04:48:34.883529 | orchestrator | Thursday 26 March 2026 04:48:29 +0000 (0:00:04.773) 0:00:28.444 ******** 2026-03-26 04:48:34.883540 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:48:34.883551 | orchestrator | 2026-03-26 04:48:34.883562 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-26 04:48:34.883572 | orchestrator | Thursday 26 March 2026 04:48:31 +0000 (0:00:01.881) 0:00:30.326 ******** 2026-03-26 04:48:34.883585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:34.883605 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:48:34.883630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:42.553839 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:48:42.553940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:42.553972 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:48:42.553979 | orchestrator | 2026-03-26 04:48:42.553986 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-03-26 04:48:42.553993 | orchestrator | Thursday 26 March 2026 04:48:34 +0000 (0:00:03.572) 0:00:33.899 ******** 2026-03-26 04:48:42.554070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:42.554084 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:48:42.554106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:42.554120 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:48:42.554130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:42.554136 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:48:42.554142 | orchestrator | 2026-03-26 04:48:42.554148 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-26 04:48:42.554154 | orchestrator | Thursday 26 March 2026 04:48:38 +0000 (0:00:03.520) 0:00:37.420 ******** 2026-03-26 04:48:42.554166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:46.704717 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:48:46.704828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:46.704844 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:48:46.704854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:48:46.704884 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:48:46.704892 | orchestrator | 2026-03-26 04:48:46.704901 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-03-26 04:48:46.704911 | orchestrator | Thursday 26 March 2026 04:48:42 +0000 (0:00:04.149) 0:00:41.569 ******** 2026-03-26 04:48:46.704936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:46.704952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:48:46.704975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-26 04:49:02.393607 | orchestrator | 2026-03-26 04:49:02.393816 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-03-26 04:49:02.393848 | orchestrator | Thursday 26 March 2026 04:48:46 +0000 (0:00:04.154) 0:00:45.723 ******** 2026-03-26 04:49:02.393866 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 04:49:02.393879 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:49:02.393890 | orchestrator | } 2026-03-26 04:49:02.393902 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 04:49:02.393913 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:49:02.393923 | orchestrator | } 2026-03-26 04:49:02.393934 | orchestrator | changed: [testbed-node-2] => { 2026-03-26 04:49:02.393945 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:49:02.393955 | orchestrator | } 2026-03-26 04:49:02.393966 | orchestrator | 2026-03-26 04:49:02.393994 | orchestrator | TASK [service-check-containers : Include 
tasks] ******************************** 2026-03-26 04:49:02.394005 | orchestrator | Thursday 26 March 2026 04:48:48 +0000 (0:00:01.392) 0:00:47.116 ******** 2026-03-26 04:49:02.394084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-03-26 04:49:02.394128 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 
04:49:02.394183 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:02.394204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:49:02.394226 | orchestrator | skipping: 
[testbed-node-2] 2026-03-26 04:49:02.394240 | orchestrator | 2026-03-26 04:49:02.394253 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-03-26 04:49:02.394265 | orchestrator | Thursday 26 March 2026 04:48:52 +0000 (0:00:04.186) 0:00:51.302 ******** 2026-03-26 04:49:02.394278 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394290 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:02.394303 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:02.394315 | orchestrator | 2026-03-26 04:49:02.394328 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-03-26 04:49:02.394342 | orchestrator | Thursday 26 March 2026 04:48:53 +0000 (0:00:01.400) 0:00:52.703 ******** 2026-03-26 04:49:02.394355 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394368 | orchestrator | 2026-03-26 04:49:02.394380 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-03-26 04:49:02.394393 | orchestrator | Thursday 26 March 2026 04:48:54 +0000 (0:00:01.131) 0:00:53.835 ******** 2026-03-26 04:49:02.394405 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394418 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:02.394430 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:02.394442 | orchestrator | 2026-03-26 04:49:02.394455 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-03-26 04:49:02.394467 | orchestrator | Thursday 26 March 2026 04:48:56 +0000 (0:00:01.391) 0:00:55.226 ******** 2026-03-26 04:49:02.394480 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394491 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:02.394502 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:02.394512 | orchestrator | 2026-03-26 04:49:02.394523 | orchestrator | TASK [mariadb : Copying MariaDB log file 
to /tmp] ****************************** 2026-03-26 04:49:02.394534 | orchestrator | Thursday 26 March 2026 04:48:57 +0000 (0:00:01.536) 0:00:56.763 ******** 2026-03-26 04:49:02.394545 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394556 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:02.394566 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:02.394577 | orchestrator | 2026-03-26 04:49:02.394588 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-03-26 04:49:02.394598 | orchestrator | Thursday 26 March 2026 04:48:59 +0000 (0:00:01.358) 0:00:58.122 ******** 2026-03-26 04:49:02.394609 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394620 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:02.394630 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:02.394641 | orchestrator | 2026-03-26 04:49:02.394652 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-03-26 04:49:02.394695 | orchestrator | Thursday 26 March 2026 04:49:01 +0000 (0:00:01.927) 0:01:00.050 ******** 2026-03-26 04:49:02.394715 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:02.394736 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:02.394756 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:02.394785 | orchestrator | 2026-03-26 04:49:02.394814 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-03-26 04:49:20.281074 | orchestrator | Thursday 26 March 2026 04:49:02 +0000 (0:00:01.357) 0:01:01.407 ******** 2026-03-26 04:49:20.281198 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281213 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281233 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281243 | orchestrator | 2026-03-26 04:49:20.281254 | orchestrator | TASK [mariadb : Comparing seqno value on all 
mariadb hosts] ******************** 2026-03-26 04:49:20.281264 | orchestrator | Thursday 26 March 2026 04:49:03 +0000 (0:00:01.545) 0:01:02.953 ******** 2026-03-26 04:49:20.281274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 04:49:20.281298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 04:49:20.281308 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 04:49:20.281318 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281327 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-26 04:49:20.281336 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-26 04:49:20.281346 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-26 04:49:20.281355 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281364 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-26 04:49:20.281373 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-26 04:49:20.281383 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-26 04:49:20.281393 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281402 | orchestrator | 2026-03-26 04:49:20.281412 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-03-26 04:49:20.281421 | orchestrator | Thursday 26 March 2026 04:49:05 +0000 (0:00:01.507) 0:01:04.460 ******** 2026-03-26 04:49:20.281430 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281440 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281449 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281458 | orchestrator | 2026-03-26 04:49:20.281468 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-03-26 04:49:20.281477 | orchestrator | Thursday 26 March 2026 04:49:06 +0000 
(0:00:01.368) 0:01:05.828 ******** 2026-03-26 04:49:20.281487 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281496 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281505 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281514 | orchestrator | 2026-03-26 04:49:20.281524 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-03-26 04:49:20.281533 | orchestrator | Thursday 26 March 2026 04:49:08 +0000 (0:00:01.405) 0:01:07.234 ******** 2026-03-26 04:49:20.281543 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281552 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281562 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281571 | orchestrator | 2026-03-26 04:49:20.281580 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-03-26 04:49:20.281590 | orchestrator | Thursday 26 March 2026 04:49:09 +0000 (0:00:01.336) 0:01:08.571 ******** 2026-03-26 04:49:20.281600 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281609 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281620 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281631 | orchestrator | 2026-03-26 04:49:20.281642 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-03-26 04:49:20.281677 | orchestrator | Thursday 26 March 2026 04:49:10 +0000 (0:00:01.377) 0:01:09.949 ******** 2026-03-26 04:49:20.281695 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281714 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281733 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281750 | orchestrator | 2026-03-26 04:49:20.281761 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-03-26 04:49:20.281796 | orchestrator | Thursday 26 March 2026 04:49:12 +0000 
(0:00:01.332) 0:01:11.281 ******** 2026-03-26 04:49:20.281807 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281818 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281828 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281839 | orchestrator | 2026-03-26 04:49:20.281850 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-03-26 04:49:20.281861 | orchestrator | Thursday 26 March 2026 04:49:13 +0000 (0:00:01.568) 0:01:12.850 ******** 2026-03-26 04:49:20.281871 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281882 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281893 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281904 | orchestrator | 2026-03-26 04:49:20.281915 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-03-26 04:49:20.281925 | orchestrator | Thursday 26 March 2026 04:49:15 +0000 (0:00:01.475) 0:01:14.325 ******** 2026-03-26 04:49:20.281936 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.281947 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.281957 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:20.281968 | orchestrator | 2026-03-26 04:49:20.281977 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-03-26 04:49:20.281987 | orchestrator | Thursday 26 March 2026 04:49:16 +0000 (0:00:01.400) 0:01:15.726 ******** 2026-03-26 04:49:20.282078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:49:20.282097 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:20.282108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:49:20.282127 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:20.282151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:49:36.960267 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:36.960381 | orchestrator | 2026-03-26 04:49:36.960398 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-03-26 04:49:36.960411 | orchestrator | Thursday 26 March 2026 04:49:20 +0000 (0:00:03.569) 0:01:19.295 ******** 2026-03-26 04:49:36.960422 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:36.960433 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:36.960444 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:36.960454 | orchestrator | 2026-03-26 04:49:36.960465 | orchestrator | TASK [mariadb : Restart 
master MariaDB container(s)] *************************** 2026-03-26 04:49:36.960476 | orchestrator | Thursday 26 March 2026 04:49:21 +0000 (0:00:01.582) 0:01:20.878 ******** 2026-03-26 04:49:36.960491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:49:36.960528 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:36.960574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2026-03-26 04:49:36.960588 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:36.960600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-26 04:49:36.960623 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 04:49:36.960634 | orchestrator | 2026-03-26 04:49:36.960811 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-03-26 04:49:36.960843 | orchestrator | Thursday 26 March 2026 04:49:25 +0000 (0:00:03.409) 0:01:24.287 ******** 2026-03-26 04:49:36.960855 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:36.960868 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:36.960881 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:36.960893 | orchestrator | 2026-03-26 04:49:36.960905 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-03-26 04:49:36.960918 | orchestrator | Thursday 26 March 2026 04:49:26 +0000 (0:00:01.730) 0:01:26.018 ******** 2026-03-26 04:49:36.960930 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:36.960942 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:36.960954 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:36.960967 | orchestrator | 2026-03-26 04:49:36.960979 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-03-26 04:49:36.960993 | orchestrator | Thursday 26 March 2026 04:49:28 +0000 (0:00:01.362) 0:01:27.380 ******** 2026-03-26 04:49:36.961005 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:36.961017 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:36.961029 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:36.961041 | orchestrator | 2026-03-26 04:49:36.961054 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-03-26 04:49:36.961066 | orchestrator | Thursday 26 March 2026 04:49:29 +0000 (0:00:01.330) 0:01:28.711 ******** 2026-03-26 04:49:36.961078 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:36.961090 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:36.961102 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 04:49:36.961114 | orchestrator | 2026-03-26 04:49:36.961126 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-26 04:49:36.961138 | orchestrator | Thursday 26 March 2026 04:49:31 +0000 (0:00:01.801) 0:01:30.512 ******** 2026-03-26 04:49:36.961150 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:49:36.961162 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:49:36.961173 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:49:36.961184 | orchestrator | 2026-03-26 04:49:36.961194 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-03-26 04:49:36.961204 | orchestrator | Thursday 26 March 2026 04:49:33 +0000 (0:00:01.951) 0:01:32.464 ******** 2026-03-26 04:49:36.961240 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:49:36.961251 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:49:36.961262 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:49:36.961272 | orchestrator | 2026-03-26 04:49:36.961290 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-26 04:49:36.961301 | orchestrator | Thursday 26 March 2026 04:49:35 +0000 (0:00:01.959) 0:01:34.423 ******** 2026-03-26 04:49:36.961312 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:49:36.961322 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:49:36.961333 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:49:36.961344 | orchestrator | 2026-03-26 04:49:36.961355 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-26 04:49:36.961366 | orchestrator | Thursday 26 March 2026 04:49:36 +0000 (0:00:01.336) 0:01:35.760 ******** 2026-03-26 04:49:36.961389 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:52:13.202167 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:52:13.202286 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:52:13.202301 | 
orchestrator | 2026-03-26 04:52:13.202313 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-26 04:52:13.202326 | orchestrator | Thursday 26 March 2026 04:49:38 +0000 (0:00:01.387) 0:01:37.147 ******** 2026-03-26 04:52:13.202337 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:52:13.202347 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:52:13.202358 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:52:13.202369 | orchestrator | 2026-03-26 04:52:13.202379 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-26 04:52:13.202390 | orchestrator | Thursday 26 March 2026 04:49:40 +0000 (0:00:02.028) 0:01:39.176 ******** 2026-03-26 04:52:13.202401 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:52:13.202412 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:52:13.202422 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:52:13.202433 | orchestrator | 2026-03-26 04:52:13.202444 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-26 04:52:13.202455 | orchestrator | Thursday 26 March 2026 04:49:41 +0000 (0:00:01.443) 0:01:40.619 ******** 2026-03-26 04:52:13.202466 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:52:13.202477 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:52:13.202488 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:52:13.202499 | orchestrator | 2026-03-26 04:52:13.202509 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-26 04:52:13.202520 | orchestrator | Thursday 26 March 2026 04:49:42 +0000 (0:00:01.379) 0:01:41.999 ******** 2026-03-26 04:52:13.202531 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:52:13.202541 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:52:13.202552 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:52:13.202563 | orchestrator | 2026-03-26 04:52:13.202573 | orchestrator 
| TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-26 04:52:13.202584 | orchestrator | Thursday 26 March 2026 04:49:46 +0000 (0:00:03.524) 0:01:45.523 ********
2026-03-26 04:52:13.202621 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.202638 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:52:13.202658 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:52:13.202678 | orchestrator |
2026-03-26 04:52:13.202697 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-26 04:52:13.202716 | orchestrator | Thursday 26 March 2026 04:49:47 +0000 (0:00:01.398) 0:01:46.921 ********
2026-03-26 04:52:13.202736 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.202756 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:52:13.202776 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:52:13.202795 | orchestrator |
2026-03-26 04:52:13.202814 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-26 04:52:13.202836 | orchestrator | Thursday 26 March 2026 04:49:49 +0000 (0:00:01.455) 0:01:48.377 ********
2026-03-26 04:52:13.202857 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:52:13.202877 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.202899 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.202946 | orchestrator |
2026-03-26 04:52:13.202960 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-26 04:52:13.202973 | orchestrator | Thursday 26 March 2026 04:49:51 +0000 (0:00:01.793) 0:01:50.171 ********
2026-03-26 04:52:13.202985 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:52:13.202998 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.203010 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.203020 | orchestrator |
2026-03-26 04:52:13.203031 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-26 04:52:13.203041 | orchestrator | Thursday 26 March 2026 04:49:52 +0000 (0:00:01.537) 0:01:51.709 ********
2026-03-26 04:52:13.203052 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:52:13.203062 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.203073 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.203084 | orchestrator |
2026-03-26 04:52:13.203094 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-03-26 04:52:13.203105 | orchestrator | Thursday 26 March 2026 04:49:54 +0000 (0:00:01.619) 0:01:53.329 ********
2026-03-26 04:52:13.203116 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:52:13.203126 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:52:13.203137 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:52:13.203147 | orchestrator |
2026-03-26 04:52:13.203158 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-03-26 04:52:13.203169 | orchestrator | Thursday 26 March 2026 04:49:55 +0000 (0:00:01.685) 0:01:55.014 ********
2026-03-26 04:52:13.203179 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:52:13.203190 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.203200 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.203211 | orchestrator |
2026-03-26 04:52:13.203222 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-26 04:52:13.203232 | orchestrator |
2026-03-26 04:52:13.203243 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-26 04:52:13.203253 | orchestrator | Thursday 26 March 2026 04:49:58 +0000 (0:00:02.231) 0:01:57.246 ********
2026-03-26 04:52:13.203264 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:52:13.203274 | orchestrator |
2026-03-26 04:52:13.203285 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-26 04:52:13.203295 | orchestrator | Thursday 26 March 2026 04:50:24 +0000 (0:00:26.258) 0:02:23.505 ********
2026-03-26 04:52:13.203306 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.203317 | orchestrator |
2026-03-26 04:52:13.203327 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-26 04:52:13.203339 | orchestrator | Thursday 26 March 2026 04:50:29 +0000 (0:00:04.617) 0:02:28.123 ********
2026-03-26 04:52:13.203364 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.203375 | orchestrator |
2026-03-26 04:52:13.203386 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-26 04:52:13.203396 | orchestrator |
2026-03-26 04:52:13.203407 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-26 04:52:13.203418 | orchestrator | Thursday 26 March 2026 04:50:32 +0000 (0:00:02.976) 0:02:31.099 ********
2026-03-26 04:52:13.203428 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:52:13.203439 | orchestrator |
2026-03-26 04:52:13.203450 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-26 04:52:13.203480 | orchestrator | Thursday 26 March 2026 04:50:58 +0000 (0:00:26.022) 0:02:57.122 ********
2026-03-26 04:52:13.203491 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:52:13.203502 | orchestrator |
2026-03-26 04:52:13.203512 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-26 04:52:13.203523 | orchestrator | Thursday 26 March 2026 04:51:02 +0000 (0:00:04.564) 0:03:01.686 ********
2026-03-26 04:52:13.203533 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:52:13.203544 | orchestrator |
2026-03-26 04:52:13.203555 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-03-26 04:52:13.203573 | orchestrator |
2026-03-26 04:52:13.203584 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-03-26 04:52:13.203645 | orchestrator | Thursday 26 March 2026 04:51:05 +0000 (0:00:02.870) 0:03:04.557 ********
2026-03-26 04:52:13.203658 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:52:13.203669 | orchestrator |
2026-03-26 04:52:13.203680 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-03-26 04:52:13.203690 | orchestrator | Thursday 26 March 2026 04:51:30 +0000 (0:00:25.220) 0:03:29.777 ********
2026-03-26 04:52:13.203701 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-03-26 04:52:13.203713 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:52:13.203723 | orchestrator |
2026-03-26 04:52:13.203734 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-03-26 04:52:13.203745 | orchestrator | Thursday 26 March 2026 04:51:38 +0000 (0:00:07.967) 0:03:37.745 ********
2026-03-26 04:52:13.203755 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-03-26 04:52:13.203766 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-26 04:52:13.203776 | orchestrator | mariadb_bootstrap_restart
2026-03-26 04:52:13.203787 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:52:13.203798 | orchestrator |
2026-03-26 04:52:13.203808 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-03-26 04:52:13.203819 | orchestrator | skipping: no hosts matched
2026-03-26 04:52:13.203830 | orchestrator |
2026-03-26 04:52:13.203840 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-03-26 04:52:13.203860 | orchestrator | skipping: no hosts matched
2026-03-26 04:52:13.203880 | orchestrator |
2026-03-26 04:52:13.203900 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-03-26 04:52:13.203919 | orchestrator |
2026-03-26 04:52:13.203938 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-03-26 04:52:13.203957 | orchestrator | Thursday 26 March 2026 04:51:42 +0000 (0:00:03.948) 0:03:41.693 ********
2026-03-26 04:52:13.203976 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:52:13.203996 | orchestrator |
2026-03-26 04:52:13.204015 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-03-26 04:52:13.204037 | orchestrator | Thursday 26 March 2026 04:51:44 +0000 (0:00:01.988) 0:03:43.681 ********
2026-03-26 04:52:13.204056 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.204075 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.204086 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.204097 | orchestrator |
2026-03-26 04:52:13.204107 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-03-26 04:52:13.204118 | orchestrator | Thursday 26 March 2026 04:51:47 +0000 (0:00:03.098) 0:03:46.780 ********
2026-03-26 04:52:13.204129 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.204139 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.204150 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:52:13.204160 | orchestrator |
2026-03-26 04:52:13.204171 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-03-26 04:52:13.204181 | orchestrator | Thursday 26 March 2026 04:51:51 +0000 (0:00:03.287) 0:03:50.067 ********
2026-03-26 04:52:13.204192 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.204202 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.204213 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.204223 | orchestrator |
2026-03-26 04:52:13.204234 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-03-26 04:52:13.204244 | orchestrator | Thursday 26 March 2026 04:51:54 +0000 (0:00:03.181) 0:03:53.248 ********
2026-03-26 04:52:13.204255 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.204265 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.204276 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:52:13.204295 | orchestrator |
2026-03-26 04:52:13.204306 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-03-26 04:52:13.204317 | orchestrator | Thursday 26 March 2026 04:51:57 +0000 (0:00:03.482) 0:03:56.731 ********
2026-03-26 04:52:13.204327 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:52:13.204338 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.204348 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:52:13.204359 | orchestrator |
2026-03-26 04:52:13.204369 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-03-26 04:52:13.204380 | orchestrator | Thursday 26 March 2026 04:52:04 +0000 (0:00:06.649) 0:04:03.381 ********
2026-03-26 04:52:13.204390 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:52:13.204401 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.204412 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.204422 | orchestrator |
2026-03-26 04:52:13.204433 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-03-26 04:52:13.204443 | orchestrator | Thursday 26 March 2026 04:52:08 +0000 (0:00:03.812) 0:04:07.193 ********
2026-03-26 04:52:13.204454 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:52:13.204473 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:52:13.204484 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:52:13.204495 | orchestrator |
2026-03-26 04:52:13.204505 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-03-26 04:52:13.204516 | orchestrator | Thursday 26 March 2026 04:52:09 +0000 (0:00:01.604) 0:04:08.798 ********
2026-03-26 04:52:13.204526 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:52:13.204537 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:52:13.204547 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:52:13.204558 | orchestrator |
2026-03-26 04:52:13.204568 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-03-26 04:52:13.204588 | orchestrator | Thursday 26 March 2026 04:52:13 +0000 (0:00:03.415) 0:04:12.214 ********
2026-03-26 04:52:33.369804 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:52:33.369919 | orchestrator |
2026-03-26 04:52:33.369936 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-03-26 04:52:33.369949 | orchestrator | Thursday 26 March 2026 04:52:15 +0000 (0:00:01.996) 0:04:14.210 ********
2026-03-26 04:52:33.369960 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:52:33.369972 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:52:33.369983 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:52:33.369994 | orchestrator |
2026-03-26 04:52:33.370005 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:52:33.370078 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-26 04:52:33.370093 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-26 04:52:33.370105 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-26 04:52:33.370115 | orchestrator |
2026-03-26 04:52:33.370126 | orchestrator |
2026-03-26 04:52:33.370137 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:52:33.370148 | orchestrator | Thursday 26 March 2026 04:52:32 +0000 (0:00:17.715) 0:04:31.925 ********
2026-03-26 04:52:33.370158 | orchestrator | ===============================================================================
2026-03-26 04:52:33.370169 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 77.50s
2026-03-26 04:52:33.370180 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.72s
2026-03-26 04:52:33.370190 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 17.15s
2026-03-26 04:52:33.370201 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 9.79s
2026-03-26 04:52:33.370236 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.65s
2026-03-26 04:52:33.370250 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.77s
2026-03-26 04:52:33.370262 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.51s
2026-03-26 04:52:33.370274 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.19s
2026-03-26 04:52:33.370287 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.15s
2026-03-26 04:52:33.370299 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.15s
2026-03-26 04:52:33.370311 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.81s
2026-03-26 04:52:33.370323 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.61s
2026-03-26 04:52:33.370335 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.57s
2026-03-26 04:52:33.370347 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.57s
2026-03-26 04:52:33.370359 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.52s
2026-03-26 04:52:33.370371 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.52s
2026-03-26 04:52:33.370384 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.48s
2026-03-26 04:52:33.370396 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.42s
2026-03-26 04:52:33.370408 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.41s
2026-03-26 04:52:33.370421 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.29s
2026-03-26 04:52:33.681994 | orchestrator | + osism apply -a upgrade rabbitmq
2026-03-26 04:52:35.780366 | orchestrator | 2026-03-26 04:52:35 | INFO  | Task 2e3e4e19-e6c3-4dbe-b118-c2b62a5d823d (rabbitmq) was prepared for execution.
2026-03-26 04:52:35.780466 | orchestrator | 2026-03-26 04:52:35 | INFO  | It takes a moment until task 2e3e4e19-e6c3-4dbe-b118-c2b62a5d823d (rabbitmq) has been started and output is visible here.
2026-03-26 04:53:18.782623 | orchestrator |
2026-03-26 04:53:18.782758 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 04:53:18.782789 | orchestrator |
2026-03-26 04:53:18.782810 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 04:53:18.782830 | orchestrator | Thursday 26 March 2026 04:52:41 +0000 (0:00:01.444) 0:00:01.444 ********
2026-03-26 04:53:18.782850 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:53:18.782870 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:53:18.782890 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:53:18.782908 | orchestrator |
2026-03-26 04:53:18.782928 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 04:53:18.782947 | orchestrator | Thursday 26 March 2026 04:52:43 +0000 (0:00:01.871) 0:00:03.315 ********
2026-03-26 04:53:18.782987 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-26 04:53:18.783008 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-26 04:53:18.783029 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-26 04:53:18.783041 | orchestrator |
2026-03-26 04:53:18.783052 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-26 04:53:18.783063 | orchestrator |
2026-03-26 04:53:18.783074 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-26 04:53:18.783085 | orchestrator | Thursday 26 March 2026 04:52:45 +0000 (0:00:01.751) 0:00:05.067 ********
2026-03-26 04:53:18.783096 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:53:18.783108 | orchestrator |
2026-03-26 04:53:18.783119 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-26 04:53:18.783130 | orchestrator | Thursday 26 March 2026 04:52:47 +0000 (0:00:02.804) 0:00:07.871 ********
2026-03-26 04:53:18.783162 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:53:18.783174 | orchestrator |
2026-03-26 04:53:18.783185 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-26 04:53:18.783196 | orchestrator | Thursday 26 March 2026 04:52:50 +0000 (0:00:02.382) 0:00:10.254 ********
2026-03-26 04:53:18.783207 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:53:18.783218 | orchestrator |
2026-03-26 04:53:18.783228 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-26 04:53:18.783239 | orchestrator | Thursday 26 March 2026 04:52:53 +0000 (0:00:03.398) 0:00:13.653 ********
2026-03-26 04:53:18.783251 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:53:18.783263 | orchestrator |
2026-03-26 04:53:18.783274 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-26 04:53:18.783284 | orchestrator | Thursday 26 March 2026 04:53:02 +0000 (0:00:09.086) 0:00:22.740 ********
2026-03-26 04:53:18.783296 | orchestrator | ok: [testbed-node-0] => {
2026-03-26 04:53:18.783314 | orchestrator |     "changed": false,
2026-03-26 04:53:18.783332 | orchestrator |     "msg": "All assertions passed"
2026-03-26 04:53:18.783350 | orchestrator | }
2026-03-26 04:53:18.783367 | orchestrator |
2026-03-26 04:53:18.783386 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-26 04:53:18.783406 | orchestrator | Thursday 26 March 2026 04:53:04 +0000 (0:00:01.301) 0:00:24.041 ********
2026-03-26 04:53:18.783424 | orchestrator | ok: [testbed-node-0] => {
2026-03-26 04:53:18.783443 | orchestrator |     "changed": false,
2026-03-26 04:53:18.783454 | orchestrator |     "msg": "All assertions passed"
2026-03-26 04:53:18.783464 | orchestrator | }
2026-03-26 04:53:18.783475 | orchestrator |
2026-03-26 04:53:18.783486 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-26 04:53:18.783497 | orchestrator | Thursday 26 March 2026 04:53:05 +0000 (0:00:01.725) 0:00:25.767 ********
2026-03-26 04:53:18.783508 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:53:18.783519 | orchestrator |
2026-03-26 04:53:18.783529 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-26 04:53:18.783540 | orchestrator | Thursday 26 March 2026 04:53:07 +0000 (0:00:02.306) 0:00:27.674 ********
2026-03-26 04:53:18.783551 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:53:18.783561 | orchestrator |
2026-03-26 04:53:18.783572 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-26 04:53:18.783660 | orchestrator | Thursday 26 March 2026 04:53:09 +0000 (0:00:02.750) 0:00:29.981 ********
2026-03-26 04:53:18.783672 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:53:18.783682 | orchestrator |
2026-03-26 04:53:18.783693 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-26 04:53:18.783704 | orchestrator | Thursday 26 March 2026 04:53:12 +0000 (0:00:01.877) 0:00:32.731 ********
2026-03-26 04:53:18.783714 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:53:18.783725 | orchestrator |
2026-03-26 04:53:18.783735 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-26 04:53:18.783749 | orchestrator | Thursday 26 March 2026 04:53:14 +0000 (0:00:01.877) 0:00:34.609 ********
2026-03-26 04:53:18.783808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:18.783855 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:18.783869 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:18.783881 | orchestrator |
2026-03-26 04:53:18.783892 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-26 04:53:18.783903 | orchestrator | Thursday 26 March 2026 04:53:16 +0000 (0:00:01.771) 0:00:36.380 ********
2026-03-26 04:53:18.783915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:18.783943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:38.215986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:38.216108 | orchestrator |
2026-03-26 04:53:38.216125 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-26 04:53:38.216139 | orchestrator | Thursday 26 March 2026 04:53:18 +0000 (0:00:02.399) 0:00:38.780 ********
2026-03-26 04:53:38.216150 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-26 04:53:38.216163 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-26 04:53:38.216173 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-26 04:53:38.216184 | orchestrator |
2026-03-26 04:53:38.216195 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-26 04:53:38.216205 | orchestrator | Thursday 26 March 2026 04:53:21 +0000 (0:00:02.454) 0:00:41.235 ********
2026-03-26 04:53:38.216216 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-26 04:53:38.216227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-26 04:53:38.216238 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-26 04:53:38.216248 | orchestrator |
2026-03-26 04:53:38.216259 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-26 04:53:38.216269 | orchestrator | Thursday 26 March 2026 04:53:24 +0000 (0:00:03.109) 0:00:44.344 ********
2026-03-26 04:53:38.216280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-26 04:53:38.216291 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-26 04:53:38.216301 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-26 04:53:38.216312 | orchestrator |
2026-03-26 04:53:38.216322 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-26 04:53:38.216333 | orchestrator | Thursday 26 March 2026 04:53:26 +0000 (0:00:02.426) 0:00:46.770 ********
2026-03-26 04:53:38.216343 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-26 04:53:38.216379 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-26 04:53:38.216391 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-26 04:53:38.216402 | orchestrator |
2026-03-26 04:53:38.216412 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-26 04:53:38.216422 | orchestrator | Thursday 26 March 2026 04:53:29 +0000 (0:00:02.314) 0:00:49.085 ********
2026-03-26 04:53:38.216433 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-26 04:53:38.216443 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-26 04:53:38.216454 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-26 04:53:38.216464 | orchestrator |
2026-03-26 04:53:38.216474 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-26 04:53:38.216485 | orchestrator | Thursday 26 March 2026 04:53:31 +0000 (0:00:02.290) 0:00:51.376 ********
2026-03-26 04:53:38.216495 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-26 04:53:38.216506 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-26 04:53:38.216517 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-26 04:53:38.216529 | orchestrator |
2026-03-26 04:53:38.216542 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-26 04:53:38.216554 | orchestrator | Thursday 26 March 2026 04:53:33 +0000 (0:00:02.591) 0:00:53.968 ********
2026-03-26 04:53:38.216609 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-26 04:53:38.216630 | orchestrator |
2026-03-26 04:53:38.216669 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-03-26 04:53:38.216689 | orchestrator | Thursday 26 March 2026 04:53:35 +0000 (0:00:01.793) 0:00:55.761 ********
2026-03-26 04:53:38.216712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:38.216734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:38.216760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-26 04:53:38.216773 | orchestrator |
2026-03-26 04:53:38.216786 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-03-26 04:53:38.216799 | orchestrator | Thursday 26 March 2026 04:53:38 +0000 (0:00:02.341) 0:00:58.103 ********
2026-03-26 04:53:38.216828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:53:47.226190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:53:47.226284 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:53:47.226296 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:53:47.226305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:53:47.226333 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:53:47.226341 | orchestrator | 2026-03-26 04:53:47.226350 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-03-26 04:53:47.226358 | orchestrator | Thursday 26 March 2026 04:53:39 +0000 (0:00:01.452) 0:00:59.555 ******** 2026-03-26 04:53:47.226379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:53:47.226387 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:53:47.226408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:53:47.226417 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:53:47.226425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:53:47.226450 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:53:47.226458 | orchestrator | 2026-03-26 04:53:47.226465 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-26 04:53:47.226473 | orchestrator | Thursday 26 March 2026 04:53:41 +0000 (0:00:01.778) 0:01:01.334 ******** 2026-03-26 04:53:47.226480 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:53:47.226489 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:53:47.226496 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:53:47.226503 | orchestrator | 2026-03-26 04:53:47.226510 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-03-26 04:53:47.226517 | orchestrator | Thursday 26 March 2026 04:53:45 +0000 (0:00:03.710) 0:01:05.044 ******** 2026-03-26 04:53:47.226525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 04:53:47.226539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 04:55:28.869240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 
'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-26 04:55:28.869387 | orchestrator | 2026-03-26 04:55:28.869405 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-03-26 04:55:28.869418 | orchestrator | Thursday 26 March 2026 04:53:47 +0000 (0:00:02.187) 0:01:07.231 ******** 2026-03-26 04:55:28.869430 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 04:55:28.869442 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:55:28.869453 | orchestrator | } 2026-03-26 04:55:28.869464 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 04:55:28.869475 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:55:28.869485 | orchestrator | } 2026-03-26 04:55:28.869496 | orchestrator | changed: [testbed-node-2] => { 2026-03-26 04:55:28.869506 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:55:28.869517 | orchestrator | } 2026-03-26 04:55:28.869528 | orchestrator | 2026-03-26 04:55:28.869539 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-26 04:55:28.869635 | orchestrator | Thursday 26 March 2026 04:53:48 +0000 (0:00:01.416) 0:01:08.648 ******** 2026-03-26 04:55:28.869753 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:55:28.869775 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:55:28.869795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:55:28.869810 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:55:28.869856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-03-26 04:55:28.869871 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:55:28.869884 | orchestrator | 2026-03-26 04:55:28.869896 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-26 04:55:28.869907 | orchestrator | Thursday 26 March 2026 04:53:50 +0000 (0:00:02.091) 0:01:10.740 ******** 2026-03-26 04:55:28.869917 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:55:28.869928 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:55:28.869939 | orchestrator | 
changed: [testbed-node-2] 2026-03-26 04:55:28.869949 | orchestrator | 2026-03-26 04:55:28.869960 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-26 04:55:28.869971 | orchestrator | 2026-03-26 04:55:28.869981 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-26 04:55:28.869992 | orchestrator | Thursday 26 March 2026 04:53:52 +0000 (0:00:01.778) 0:01:12.518 ******** 2026-03-26 04:55:28.870003 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:55:28.870014 | orchestrator | 2026-03-26 04:55:28.870113 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-26 04:55:28.870131 | orchestrator | Thursday 26 March 2026 04:53:54 +0000 (0:00:02.163) 0:01:14.682 ******** 2026-03-26 04:55:28.870150 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:55:28.870161 | orchestrator | 2026-03-26 04:55:28.870172 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-26 04:55:28.870183 | orchestrator | Thursday 26 March 2026 04:54:03 +0000 (0:00:09.146) 0:01:23.828 ******** 2026-03-26 04:55:28.870193 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:55:28.870204 | orchestrator | 2026-03-26 04:55:28.870215 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-26 04:55:28.870225 | orchestrator | Thursday 26 March 2026 04:54:12 +0000 (0:00:09.086) 0:01:32.915 ******** 2026-03-26 04:55:28.870236 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:55:28.870246 | orchestrator | 2026-03-26 04:55:28.870257 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-26 04:55:28.870268 | orchestrator | 2026-03-26 04:55:28.870278 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-26 04:55:28.870289 | orchestrator | 
Thursday 26 March 2026 04:54:21 +0000 (0:00:08.807) 0:01:41.722 ******** 2026-03-26 04:55:28.870300 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:55:28.870311 | orchestrator | 2026-03-26 04:55:28.870321 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-26 04:55:28.870332 | orchestrator | Thursday 26 March 2026 04:54:23 +0000 (0:00:01.683) 0:01:43.405 ******** 2026-03-26 04:55:28.870343 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:55:28.870353 | orchestrator | 2026-03-26 04:55:28.870364 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-26 04:55:28.870374 | orchestrator | Thursday 26 March 2026 04:54:31 +0000 (0:00:08.028) 0:01:51.434 ******** 2026-03-26 04:55:28.870393 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:55:28.870404 | orchestrator | 2026-03-26 04:55:28.870415 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-26 04:55:28.870425 | orchestrator | Thursday 26 March 2026 04:54:45 +0000 (0:00:13.625) 0:02:05.060 ******** 2026-03-26 04:55:28.870436 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:55:28.870447 | orchestrator | 2026-03-26 04:55:28.870458 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-26 04:55:28.870469 | orchestrator | 2026-03-26 04:55:28.870479 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-26 04:55:28.870490 | orchestrator | Thursday 26 March 2026 04:54:53 +0000 (0:00:08.243) 0:02:13.303 ******** 2026-03-26 04:55:28.870501 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:55:28.870512 | orchestrator | 2026-03-26 04:55:28.870528 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-26 04:55:28.870539 | orchestrator | Thursday 26 March 2026 04:54:55 +0000 (0:00:01.729) 
0:02:15.033 ******** 2026-03-26 04:55:28.870582 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:55:28.870594 | orchestrator | 2026-03-26 04:55:28.870604 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-26 04:55:28.870615 | orchestrator | Thursday 26 March 2026 04:55:03 +0000 (0:00:08.941) 0:02:23.974 ******** 2026-03-26 04:55:28.870626 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:55:28.870637 | orchestrator | 2026-03-26 04:55:28.870647 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-26 04:55:28.870658 | orchestrator | Thursday 26 March 2026 04:55:18 +0000 (0:00:14.227) 0:02:38.202 ******** 2026-03-26 04:55:28.870669 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:55:28.870679 | orchestrator | 2026-03-26 04:55:28.870690 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-26 04:55:28.870700 | orchestrator | 2026-03-26 04:55:28.870711 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-26 04:55:28.870732 | orchestrator | Thursday 26 March 2026 04:55:28 +0000 (0:00:10.661) 0:02:48.864 ******** 2026-03-26 04:55:35.196174 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 04:55:35.196269 | orchestrator | 2026-03-26 04:55:35.196279 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-26 04:55:35.196288 | orchestrator | Thursday 26 March 2026 04:55:30 +0000 (0:00:01.517) 0:02:50.382 ******** 2026-03-26 04:55:35.196295 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:55:35.196307 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:55:35.196319 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:55:35.196331 | orchestrator | 2026-03-26 04:55:35.196343 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-26 04:55:35.196356 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-26 04:55:35.196370 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 04:55:35.196379 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-26 04:55:35.196387 | orchestrator | 2026-03-26 04:55:35.196394 | orchestrator | 2026-03-26 04:55:35.196401 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 04:55:35.196409 | orchestrator | Thursday 26 March 2026 04:55:34 +0000 (0:00:04.463) 0:02:54.846 ******** 2026-03-26 04:55:35.196416 | orchestrator | =============================================================================== 2026-03-26 04:55:35.196424 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 36.94s 2026-03-26 04:55:35.196431 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 27.71s 2026-03-26 04:55:35.196438 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 26.12s 2026-03-26 04:55:35.196470 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.09s 2026-03-26 04:55:35.196478 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.58s 2026-03-26 04:55:35.196485 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.46s 2026-03-26 04:55:35.196492 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.71s 2026-03-26 04:55:35.196499 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.40s 2026-03-26 04:55:35.196506 | orchestrator | rabbitmq : Copying over rabbitmq.conf 
----------------------------------- 3.11s 2026-03-26 04:55:35.196513 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.80s 2026-03-26 04:55:35.196520 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.75s 2026-03-26 04:55:35.196527 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.59s 2026-03-26 04:55:35.196534 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.45s 2026-03-26 04:55:35.196541 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.43s 2026-03-26 04:55:35.196573 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.40s 2026-03-26 04:55:35.196581 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.38s 2026-03-26 04:55:35.196589 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.34s 2026-03-26 04:55:35.196596 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.32s 2026-03-26 04:55:35.196603 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.31s 2026-03-26 04:55:35.196610 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.29s 2026-03-26 04:55:35.558330 | orchestrator | + osism apply -a upgrade openvswitch 2026-03-26 04:55:37.616808 | orchestrator | 2026-03-26 04:55:37 | INFO  | Task 3fbc17bd-68e1-4e2f-bde2-7bb54783be9e (openvswitch) was prepared for execution. 2026-03-26 04:55:37.616919 | orchestrator | 2026-03-26 04:55:37 | INFO  | It takes a moment until task 3fbc17bd-68e1-4e2f-bde2-7bb54783be9e (openvswitch) has been started and output is visible here. 
2026-03-26 04:55:55.283808 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-03-26 04:55:55.283927 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-03-26 04:55:55.283971 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-03-26 04:55:55.283981 | orchestrator | (): 'NoneType' object is not subscriptable
2026-03-26 04:55:55.284001 | orchestrator |
2026-03-26 04:55:55.284013 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 04:55:55.284023 | orchestrator |
2026-03-26 04:55:55.284033 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 04:55:55.284042 | orchestrator | Thursday 26 March 2026 04:55:42 +0000 (0:00:01.079) 0:00:01.079 ********
2026-03-26 04:55:55.284052 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:55:55.284063 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:55:55.284072 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:55:55.284082 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:55:55.284091 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:55:55.284101 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:55:55.284110 | orchestrator |
2026-03-26 04:55:55.284120 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 04:55:55.284130 | orchestrator | Thursday 26 March 2026 04:55:44 +0000 (0:00:01.553) 0:00:02.633 ********
2026-03-26 04:55:55.284140 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 04:55:55.284170 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 04:55:55.284181 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 04:55:55.284190 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 04:55:55.284200 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 04:55:55.284210 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-03-26 04:55:55.284219 | orchestrator |
2026-03-26 04:55:55.284229 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-03-26 04:55:55.284238 | orchestrator |
2026-03-26 04:55:55.284248 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-03-26 04:55:55.284258 | orchestrator | Thursday 26 March 2026 04:55:45 +0000 (0:00:01.072) 0:00:03.706 ********
2026-03-26 04:55:55.284268 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 04:55:55.284279 | orchestrator |
2026-03-26 04:55:55.284289 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-03-26 04:55:55.284298 | orchestrator | Thursday 26 March 2026 04:55:47 +0000 (0:00:02.150) 0:00:05.856 ********
2026-03-26 04:55:55.284308 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-03-26 04:55:55.284318 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-03-26 04:55:55.284328 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-03-26 04:55:55.284339 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-03-26 04:55:55.284350 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-03-26 04:55:55.284361 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-03-26 04:55:55.284371 | orchestrator |
2026-03-26 04:55:55.284382 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-03-26 04:55:55.284393 | orchestrator | Thursday 26 March 2026 04:55:49 +0000 (0:00:01.451) 0:00:07.308 ********
2026-03-26 04:55:55.284404 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-03-26 04:55:55.284415 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-03-26 04:55:55.284426 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-03-26 04:55:55.284437 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-03-26 04:55:55.284448 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-03-26 04:55:55.284458 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-03-26 04:55:55.284469 | orchestrator |
2026-03-26 04:55:55.284480 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-03-26 04:55:55.284491 | orchestrator | Thursday 26 March 2026 04:55:50 +0000 (0:00:01.386) 0:00:08.694 ********
2026-03-26 04:55:55.284502 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-03-26 04:55:55.284513 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:55:55.284523 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-03-26 04:55:55.284534 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:55:55.284605 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-03-26 04:55:55.284617 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:55:55.284629 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-03-26 04:55:55.284640 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:55:55.284651 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-03-26 04:55:55.284662 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:55:55.284673 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-03-26 04:55:55.284684 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:55:55.284694 | orchestrator |
2026-03-26 04:55:55.284704 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-03-26 04:55:55.284713 | orchestrator | Thursday 26 March 2026 04:55:52 +0000 (0:00:01.959) 0:00:10.654 ********
2026-03-26 04:55:55.284731 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:55:55.284741 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:55:55.284750 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:55:55.284760 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:55:55.284769 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:55:55.284795 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:55:55.284806 | orchestrator |
2026-03-26 04:55:55.284816 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-03-26 04:55:55.284832 | orchestrator | Thursday 26 March 2026 04:55:53 +0000 (0:00:01.091) 0:00:11.746 ********
2026-03-26 04:55:55.284845 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:55.284861 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:55.284871 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:55.284882 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:55.284892 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:55.284921 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:55:57.542168 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:55:57.542281 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:55:57.542296 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:55:57.542309 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:57.542351 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:55:57.542392 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:55:57.542405 | orchestrator |
2026-03-26 04:55:57.542416 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-03-26 04:55:57.542429 | orchestrator | Thursday 26 March 2026 04:55:55 +0000 (0:00:01.717) 0:00:13.463 ********
2026-03-26 04:55:57.542439 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:57.542451 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:57.542461 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:57.542471 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:57.542493 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:55:57.542511 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:01.089657 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:01.089873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:01.089897 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:01.089934 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:01.089962 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:01.089994 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:01.090007 | orchestrator |
2026-03-26 04:56:01.090086 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-03-26 04:56:01.090100 | orchestrator | Thursday 26 March 2026 04:55:57 +0000 (0:00:02.390) 0:00:15.854 ********
2026-03-26 04:56:01.090111 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:56:01.090123 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:56:01.090134 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:56:01.090148 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:56:01.090160 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:56:01.090173 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:56:01.090187 | orchestrator |
2026-03-26 04:56:01.090200 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-03-26 04:56:01.090212 | orchestrator | Thursday 26 March 2026 04:55:59 +0000 (0:00:01.442) 0:00:17.296 ********
2026-03-26 04:56:01.090227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:01.090252 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:01.090266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:01.090285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:01.090308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:02.478288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:02.478396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:02.478436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:02.478448 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:02.478474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:02.478505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:02.478518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:02.478538 | orchestrator |
2026-03-26 04:56:02.478624 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-03-26 04:56:02.478637 | orchestrator | Thursday 26 March 2026 04:56:01 +0000 (0:00:02.097) 0:00:19.393 ********
2026-03-26 04:56:02.478649 | orchestrator | changed: [testbed-node-0] => {
2026-03-26 04:56:02.478661 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:56:02.478672 | orchestrator | }
2026-03-26 04:56:02.478684 | orchestrator | changed: [testbed-node-1] => {
2026-03-26 04:56:02.478695 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:56:02.478705 | orchestrator | }
2026-03-26 04:56:02.478716 | orchestrator | changed: [testbed-node-2] => {
2026-03-26 04:56:02.478727 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:56:02.478737 | orchestrator | }
2026-03-26 04:56:02.478748 | orchestrator | changed: [testbed-node-3] => {
2026-03-26 04:56:02.478759 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:56:02.478769 | orchestrator | }
2026-03-26 04:56:02.478780 | orchestrator | changed: [testbed-node-4] => {
2026-03-26 04:56:02.478791 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:56:02.478802 | orchestrator | }
2026-03-26 04:56:02.478813 | orchestrator | changed: [testbed-node-5] => {
2026-03-26 04:56:02.478826 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 04:56:02.478839 | orchestrator | }
2026-03-26 04:56:02.478851 | orchestrator |
2026-03-26 04:56:02.478863 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-03-26 04:56:02.478876 | orchestrator | Thursday 26 March 2026 04:56:02 +0000 (0:00:00.936) 0:00:20.330 ********
2026-03-26 04:56:02.478890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:02.478911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:02.478924 | orchestrator | skipping: [testbed-node-0]
2026-03-26 04:56:02.478937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:02.478969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:27.691288 | orchestrator | skipping: [testbed-node-1]
2026-03-26 04:56:27.691437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-03-26 04:56:27.691475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-03-26 04:56:27.691497 | orchestrator | skipping: [testbed-node-2]
2026-03-26 04:56:27.691527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-26 04:56:27.691571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-26 04:56:27.691585 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-03-26 04:56:27.691597 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-03-26 04:56:27.691646 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:56:27.691658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-26 04:56:27.691689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-26 04:56:27.691701 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:56:27.691713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-03-26 04:56:27.691724 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-03-26 04:56:27.691736 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:56:27.691747 | orchestrator | 2026-03-26 04:56:27.691764 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 04:56:27.691776 | orchestrator | Thursday 26 March 2026 04:56:03 +0000 (0:00:01.833) 0:00:22.164 ******** 2026-03-26 04:56:27.691787 | orchestrator | 2026-03-26 04:56:27.691798 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 04:56:27.691809 | orchestrator | Thursday 26 March 2026 04:56:04 +0000 (0:00:00.164) 0:00:22.329 ******** 2026-03-26 04:56:27.691820 | orchestrator | 2026-03-26 04:56:27.691832 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 04:56:27.691845 | orchestrator | Thursday 26 March 2026 04:56:04 +0000 (0:00:00.153) 0:00:22.483 ******** 2026-03-26 04:56:27.691865 | orchestrator | 2026-03-26 04:56:27.691878 | orchestrator | TASK [openvswitch : Flush 
Handlers] ******************************************** 2026-03-26 04:56:27.691890 | orchestrator | Thursday 26 March 2026 04:56:04 +0000 (0:00:00.144) 0:00:22.627 ******** 2026-03-26 04:56:27.691902 | orchestrator | 2026-03-26 04:56:27.691915 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 04:56:27.691927 | orchestrator | Thursday 26 March 2026 04:56:04 +0000 (0:00:00.348) 0:00:22.975 ******** 2026-03-26 04:56:27.691939 | orchestrator | 2026-03-26 04:56:27.691951 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-26 04:56:27.691963 | orchestrator | Thursday 26 March 2026 04:56:04 +0000 (0:00:00.145) 0:00:23.121 ******** 2026-03-26 04:56:27.691975 | orchestrator | 2026-03-26 04:56:27.691986 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-26 04:56:27.691999 | orchestrator | Thursday 26 March 2026 04:56:05 +0000 (0:00:00.150) 0:00:23.271 ******** 2026-03-26 04:56:27.692011 | orchestrator | changed: [testbed-node-4] 2026-03-26 04:56:27.692023 | orchestrator | changed: [testbed-node-5] 2026-03-26 04:56:27.692035 | orchestrator | changed: [testbed-node-1] 2026-03-26 04:56:27.692047 | orchestrator | changed: [testbed-node-3] 2026-03-26 04:56:27.692059 | orchestrator | changed: [testbed-node-0] 2026-03-26 04:56:27.692076 | orchestrator | changed: [testbed-node-2] 2026-03-26 04:56:27.692095 | orchestrator | 2026-03-26 04:56:27.692113 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-26 04:56:27.692132 | orchestrator | Thursday 26 March 2026 04:56:15 +0000 (0:00:10.909) 0:00:34.180 ******** 2026-03-26 04:56:27.692151 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:56:27.692171 | orchestrator | ok: [testbed-node-1] 2026-03-26 04:56:27.692190 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:56:27.692209 | orchestrator | ok: [testbed-node-3] 
2026-03-26 04:56:27.692228 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:56:27.692246 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:56:27.692262 | orchestrator |
2026-03-26 04:56:27.692273 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-03-26 04:56:27.692284 | orchestrator | Thursday 26 March 2026 04:56:17 +0000 (0:00:01.206) 0:00:35.387 ********
2026-03-26 04:56:27.692295 | orchestrator | changed: [testbed-node-5]
2026-03-26 04:56:27.692314 | orchestrator | changed: [testbed-node-1]
2026-03-26 04:56:40.809937 | orchestrator | changed: [testbed-node-0]
2026-03-26 04:56:40.810098 | orchestrator | changed: [testbed-node-3]
2026-03-26 04:56:40.810117 | orchestrator | changed: [testbed-node-4]
2026-03-26 04:56:40.810129 | orchestrator | changed: [testbed-node-2]
2026-03-26 04:56:40.810141 | orchestrator |
2026-03-26 04:56:40.810153 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-03-26 04:56:40.810173 | orchestrator | Thursday 26 March 2026 04:56:27 +0000 (0:00:10.486) 0:00:45.873 ********
2026-03-26 04:56:40.810186 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-03-26 04:56:40.810199 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-03-26 04:56:40.810211 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-03-26 04:56:40.810222 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-03-26 04:56:40.810233 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-03-26 04:56:40.810245 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-03-26 04:56:40.810256 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-03-26 04:56:40.810267 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-03-26 04:56:40.810303 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-03-26 04:56:40.810313 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-03-26 04:56:40.810324 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-03-26 04:56:40.810335 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-03-26 04:56:40.810346 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 04:56:40.810357 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 04:56:40.810368 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 04:56:40.810379 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 04:56:40.810404 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 04:56:40.810415 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-03-26 04:56:40.810426 | orchestrator |
2026-03-26 04:56:40.810438 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-03-26 04:56:40.810449 | orchestrator | Thursday 26 March 2026 04:56:34 +0000 (0:00:06.323) 0:00:52.197 ********
2026-03-26 04:56:40.810461 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-03-26 04:56:40.810472 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:56:40.810483 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-03-26 04:56:40.810494 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:56:40.810505 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-03-26 04:56:40.810517 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:56:40.810528 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-03-26 04:56:40.810540 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-03-26 04:56:40.810551 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-03-26 04:56:40.810562 | orchestrator |
2026-03-26 04:56:40.810574 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-03-26 04:56:40.810586 | orchestrator | Thursday 26 March 2026 04:56:36 +0000 (0:00:02.279) 0:00:54.476 ********
2026-03-26 04:56:40.810621 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-03-26 04:56:40.810631 | orchestrator | skipping: [testbed-node-3]
2026-03-26 04:56:40.810641 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-03-26 04:56:40.810652 | orchestrator | skipping: [testbed-node-4]
2026-03-26 04:56:40.810663 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-03-26 04:56:40.810674 | orchestrator | skipping: [testbed-node-5]
2026-03-26 04:56:40.810684 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-03-26 04:56:40.810695 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-03-26 04:56:40.810705 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-03-26 04:56:40.810715 | orchestrator |
2026-03-26 04:56:40.810726 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 04:56:40.810738 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 04:56:40.810750 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 04:56:40.810777 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-26 04:56:40.810798 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:56:40.810810 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:56:40.810820 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-26 04:56:40.810831 | orchestrator |
2026-03-26 04:56:40.810841 | orchestrator |
2026-03-26 04:56:40.810852 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 04:56:40.810862 | orchestrator | Thursday 26 March 2026 04:56:40 +0000 (0:00:04.022) 0:00:58.498 ********
2026-03-26 04:56:40.810872 | orchestrator | ===============================================================================
2026-03-26 04:56:40.810882 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.91s
2026-03-26 04:56:40.810892 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.49s
2026-03-26 04:56:40.810903 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.32s
2026-03-26 04:56:40.810913 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.02s
2026-03-26 04:56:40.810923 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.39s
2026-03-26 04:56:40.810944 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.28s
2026-03-26 04:56:40.810954 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.15s
2026-03-26 04:56:40.810964 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.10s
2026-03-26 04:56:40.810973 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.96s
2026-03-26 04:56:40.810983 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.83s
2026-03-26 04:56:40.810993 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.72s
2026-03-26 04:56:40.811003 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.55s
2026-03-26 04:56:40.811013 | orchestrator | module-load : Load modules ---------------------------------------------- 1.45s
2026-03-26 04:56:40.811024 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.44s
2026-03-26 04:56:40.811034 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.39s
2026-03-26 04:56:40.811049 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.21s
2026-03-26 04:56:40.811060 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.11s
2026-03-26 04:56:40.811070 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.09s
2026-03-26 04:56:40.811080 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.07s
2026-03-26 04:56:40.811080 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.94s
2026-03-26 04:56:41.121763 | orchestrator | + osism apply -a upgrade ovn
2026-03-26 04:56:43.232389 | orchestrator | 2026-03-26 04:56:43 | INFO  | Task 53508c48-b0a1-45e6-8522-6a5123067a74 (ovn) was prepared for execution.
2026-03-26 04:56:43.232486 | orchestrator | 2026-03-26 04:56:43 | INFO  | It takes a moment until task 53508c48-b0a1-45e6-8522-6a5123067a74 (ovn) has been started and output is visible here.
2026-03-26 04:57:05.445757 | orchestrator |
2026-03-26 04:57:05.445890 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-26 04:57:05.445909 | orchestrator |
2026-03-26 04:57:05.445921 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-26 04:57:05.445932 | orchestrator | Thursday 26 March 2026 04:56:48 +0000 (0:00:01.326) 0:00:01.326 ********
2026-03-26 04:57:05.445943 | orchestrator | ok: [testbed-node-0]
2026-03-26 04:57:05.445979 | orchestrator | ok: [testbed-node-1]
2026-03-26 04:57:05.445991 | orchestrator | ok: [testbed-node-2]
2026-03-26 04:57:05.446002 | orchestrator | ok: [testbed-node-3]
2026-03-26 04:57:05.446013 | orchestrator | ok: [testbed-node-4]
2026-03-26 04:57:05.446095 | orchestrator | ok: [testbed-node-5]
2026-03-26 04:57:05.446107 | orchestrator |
2026-03-26 04:57:05.446119 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-26 04:57:05.446130 | orchestrator | Thursday 26 March 2026 04:56:52 +0000 (0:00:03.418) 0:00:04.744 ********
2026-03-26 04:57:05.446141 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-26 04:57:05.446152 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-26 04:57:05.446163 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-26 04:57:05.446173 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-26 04:57:05.446184 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-26 04:57:05.446195 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-26 04:57:05.446205 | orchestrator |
2026-03-26
04:57:05.446216 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-26 04:57:05.446229 | orchestrator |
2026-03-26 04:57:05.446245 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-26 04:57:05.446265 | orchestrator | Thursday 26 March 2026 04:56:55 +0000 (0:00:03.073) 0:00:07.818 ********
2026-03-26 04:57:05.446286 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 04:57:05.446301 | orchestrator |
2026-03-26 04:57:05.446313 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-26 04:57:05.446325 | orchestrator | Thursday 26 March 2026 04:56:58 +0000 (0:00:02.771) 0:00:10.590 ********
2026-03-26 04:57:05.446341 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446357 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446370 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446383 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446410 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446454 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446468 | orchestrator |
2026-03-26 04:57:05.446482 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-26 04:57:05.446503 | orchestrator | Thursday 26 March 2026 04:57:00 +0000 (0:00:02.402) 0:00:12.993 ********
2026-03-26 04:57:05.446523 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446538 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446549 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446560 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446571 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446582 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446594 | orchestrator |
2026-03-26 04:57:05.446605 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-26 04:57:05.446616 | orchestrator | Thursday 26 March 2026 04:57:03 +0000 (0:00:02.288) 0:00:15.648 ********
2026-03-26 04:57:05.446632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446654 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:05.446683 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:13.244666 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:13.244854 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:13.244873 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:13.244887 | orchestrator |
2026-03-26 04:57:13.244904 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-26 04:57:13.244920 | orchestrator | Thursday 26 March 2026 04:57:05 +0000 (0:00:02.288) 0:00:17.937 ********
2026-03-26 04:57:13.244935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:13.244949 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:13.244965 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-26 04:57:13.245024 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller',
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245042 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245069 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245106 | orchestrator | 2026-03-26 04:57:13.245116 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-03-26 04:57:13.245125 | orchestrator | Thursday 26 March 2026 04:57:08 +0000 (0:00:03.144) 0:00:21.081 ******** 2026-03-26 04:57:13.245135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 04:57:13.245202 | orchestrator | 2026-03-26 04:57:13.245212 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-03-26 04:57:13.245228 | orchestrator | Thursday 26 March 2026 04:57:11 +0000 (0:00:02.529) 0:00:23.611 ******** 2026-03-26 04:57:13.245239 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 04:57:13.245250 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:57:13.245261 | orchestrator | } 2026-03-26 04:57:13.245271 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 04:57:13.245281 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:57:13.245290 | orchestrator | } 2026-03-26 04:57:13.245300 | orchestrator | changed: [testbed-node-2] => { 2026-03-26 04:57:13.245310 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:57:13.245319 | orchestrator | } 2026-03-26 04:57:13.245329 | orchestrator | changed: [testbed-node-3] => { 2026-03-26 04:57:13.245339 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:57:13.245348 | orchestrator | } 2026-03-26 04:57:13.245358 | orchestrator | changed: [testbed-node-4] => { 2026-03-26 04:57:13.245366 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:57:13.245375 | orchestrator | } 2026-03-26 04:57:13.245383 | orchestrator | changed: [testbed-node-5] => { 2026-03-26 04:57:13.245391 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 04:57:13.245400 | orchestrator | } 
2026-03-26 04:57:13.245408 | orchestrator | 2026-03-26 04:57:13.245417 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-26 04:57:13.245426 | orchestrator | Thursday 26 March 2026 04:57:13 +0000 (0:00:02.009) 0:00:25.621 ******** 2026-03-26 04:57:13.245445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:57:42.528958 | orchestrator | skipping: [testbed-node-0] 2026-03-26 04:57:42.529097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:57:42.529118 | orchestrator | skipping: [testbed-node-1] 2026-03-26 04:57:42.529170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:57:42.529185 | orchestrator | skipping: [testbed-node-2] 2026-03-26 04:57:42.529197 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:57:42.529237 | orchestrator | skipping: [testbed-node-3] 2026-03-26 04:57:42.529249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:57:42.529261 | orchestrator | skipping: [testbed-node-4] 2026-03-26 04:57:42.529272 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 04:57:42.529283 | orchestrator | skipping: [testbed-node-5] 2026-03-26 04:57:42.529294 | orchestrator | 2026-03-26 04:57:42.529305 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-26 04:57:42.529318 | orchestrator | Thursday 26 March 2026 04:57:15 +0000 (0:00:02.506) 0:00:28.127 ******** 2026-03-26 04:57:42.529329 | orchestrator | ok: [testbed-node-1] 2026-03-26 
04:57:42.529340 | orchestrator | ok: [testbed-node-0] 2026-03-26 04:57:42.529351 | orchestrator | ok: [testbed-node-2] 2026-03-26 04:57:42.529362 | orchestrator | ok: [testbed-node-3] 2026-03-26 04:57:42.529372 | orchestrator | ok: [testbed-node-5] 2026-03-26 04:57:42.529383 | orchestrator | ok: [testbed-node-4] 2026-03-26 04:57:42.529394 | orchestrator | 2026-03-26 04:57:42.529419 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-26 04:57:42.529431 | orchestrator | Thursday 26 March 2026 04:57:19 +0000 (0:00:03.550) 0:00:31.677 ******** 2026-03-26 04:57:42.529443 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-26 04:57:42.529457 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-26 04:57:42.529469 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-26 04:57:42.529481 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-26 04:57:42.529493 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-26 04:57:42.529505 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-26 04:57:42.529517 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 04:57:42.529529 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 04:57:42.529541 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 04:57:42.529553 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 04:57:42.529565 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 04:57:42.529597 | 
orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-26 04:57:42.529610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-26 04:57:42.529632 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-26 04:57:42.529645 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-26 04:57:42.529658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-26 04:57:42.529670 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-26 04:57:42.529682 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-03-26 04:57:42.529694 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 04:57:42.529707 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 04:57:42.529719 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 04:57:42.529731 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 04:57:42.529743 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-26 04:57:42.529756 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 
2026-03-26 04:57:42.529767 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 04:57:42.529780 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 04:57:42.529792 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 04:57:42.529803 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 04:57:42.529813 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 04:57:42.529824 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-26 04:57:42.529835 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 04:57:42.529846 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 04:57:42.529856 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 04:57:42.529867 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 04:57:42.529878 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 04:57:42.529912 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-26 04:57:42.529923 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-26 04:57:42.529934 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-26 04:57:42.529950 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-26 04:57:42.529962 | orchestrator | 
ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-26 04:57:42.529972 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-26 04:57:42.529983 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-26 04:57:42.529994 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-26 04:57:42.530073 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-26 04:57:42.530089 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-26 04:57:42.530099 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-26 04:57:42.530110 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-26 04:57:42.530129 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-26 05:00:30.661355 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-26 05:00:30.661488 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-26 05:00:30.661506 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-26 05:00:30.661518 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 
'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-26 05:00:30.661529 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-26 05:00:30.661573 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-26 05:00:30.661585 | orchestrator | 2026-03-26 05:00:30.661597 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 05:00:30.661609 | orchestrator | Thursday 26 March 2026 04:57:39 +0000 (0:00:20.184) 0:00:51.861 ******** 2026-03-26 05:00:30.661620 | orchestrator | 2026-03-26 05:00:30.661631 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 05:00:30.661642 | orchestrator | Thursday 26 March 2026 04:57:39 +0000 (0:00:00.458) 0:00:52.320 ******** 2026-03-26 05:00:30.661653 | orchestrator | 2026-03-26 05:00:30.661664 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 05:00:30.661675 | orchestrator | Thursday 26 March 2026 04:57:40 +0000 (0:00:00.464) 0:00:52.785 ******** 2026-03-26 05:00:30.661686 | orchestrator | 2026-03-26 05:00:30.661697 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 05:00:30.661708 | orchestrator | Thursday 26 March 2026 04:57:40 +0000 (0:00:00.445) 0:00:53.231 ******** 2026-03-26 05:00:30.661719 | orchestrator | 2026-03-26 05:00:30.661730 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 05:00:30.661741 | orchestrator | Thursday 26 March 2026 04:57:41 +0000 (0:00:00.450) 0:00:53.681 ******** 2026-03-26 05:00:30.661752 | orchestrator | 2026-03-26 05:00:30.661763 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-26 
05:00:30.661773 | orchestrator | Thursday 26 March 2026 04:57:41 +0000 (0:00:00.457) 0:00:54.139 ******** 2026-03-26 05:00:30.661784 | orchestrator | 2026-03-26 05:00:30.661795 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-26 05:00:30.661806 | orchestrator | Thursday 26 March 2026 04:57:42 +0000 (0:00:00.841) 0:00:54.980 ******** 2026-03-26 05:00:30.661817 | orchestrator | 2026-03-26 05:00:30.661828 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-03-26 05:00:30.661840 | orchestrator | changed: [testbed-node-3] 2026-03-26 05:00:30.661853 | orchestrator | changed: [testbed-node-4] 2026-03-26 05:00:30.661866 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:00:30.661879 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:00:30.661891 | orchestrator | changed: [testbed-node-5] 2026-03-26 05:00:30.661903 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:00:30.661942 | orchestrator | 2026-03-26 05:00:30.661954 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-26 05:00:30.661965 | orchestrator | 2026-03-26 05:00:30.661976 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-26 05:00:30.661987 | orchestrator | Thursday 26 March 2026 04:59:54 +0000 (0:02:11.773) 0:03:06.754 ******** 2026-03-26 05:00:30.661998 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-26 05:00:30.662009 | orchestrator | 2026-03-26 05:00:30.662097 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-26 05:00:30.662116 | orchestrator | Thursday 26 March 2026 04:59:56 +0000 (0:00:01.950) 0:03:08.705 ******** 2026-03-26 05:00:30.662162 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-26 05:00:30.662184 | orchestrator | 2026-03-26 05:00:30.662204 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-26 05:00:30.662223 | orchestrator | Thursday 26 March 2026 04:59:58 +0000 (0:00:01.933) 0:03:10.639 ******** 2026-03-26 05:00:30.662241 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662262 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662273 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662284 | orchestrator | 2026-03-26 05:00:30.662295 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-26 05:00:30.662305 | orchestrator | Thursday 26 March 2026 04:59:59 +0000 (0:00:01.802) 0:03:12.442 ******** 2026-03-26 05:00:30.662316 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662327 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662337 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662348 | orchestrator | 2026-03-26 05:00:30.662358 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-26 05:00:30.662369 | orchestrator | Thursday 26 March 2026 05:00:01 +0000 (0:00:01.355) 0:03:13.797 ******** 2026-03-26 05:00:30.662380 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662390 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662401 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662411 | orchestrator | 2026-03-26 05:00:30.662422 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-26 05:00:30.662433 | orchestrator | Thursday 26 March 2026 05:00:02 +0000 (0:00:01.418) 0:03:15.216 ******** 2026-03-26 05:00:30.662443 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662454 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662464 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662475 | orchestrator | 
2026-03-26 05:00:30.662485 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-26 05:00:30.662496 | orchestrator | Thursday 26 March 2026 05:00:04 +0000 (0:00:01.650) 0:03:16.866 ******** 2026-03-26 05:00:30.662507 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662574 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662587 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662597 | orchestrator | 2026-03-26 05:00:30.662608 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-26 05:00:30.662619 | orchestrator | Thursday 26 March 2026 05:00:05 +0000 (0:00:01.423) 0:03:18.290 ******** 2026-03-26 05:00:30.662630 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:00:30.662641 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:00:30.662651 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:00:30.662662 | orchestrator | 2026-03-26 05:00:30.662672 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-26 05:00:30.662683 | orchestrator | Thursday 26 March 2026 05:00:07 +0000 (0:00:01.482) 0:03:19.772 ******** 2026-03-26 05:00:30.662694 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662704 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662715 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662725 | orchestrator | 2026-03-26 05:00:30.662736 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-26 05:00:30.662758 | orchestrator | Thursday 26 March 2026 05:00:08 +0000 (0:00:01.687) 0:03:21.459 ******** 2026-03-26 05:00:30.662769 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662780 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662791 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662801 | orchestrator | 2026-03-26 05:00:30.662812 | orchestrator | TASK [ovn-db 
: Get OVN NB database information] ******************************** 2026-03-26 05:00:30.662823 | orchestrator | Thursday 26 March 2026 05:00:10 +0000 (0:00:01.505) 0:03:22.965 ******** 2026-03-26 05:00:30.662833 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662844 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662855 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662865 | orchestrator | 2026-03-26 05:00:30.662876 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-26 05:00:30.662887 | orchestrator | Thursday 26 March 2026 05:00:12 +0000 (0:00:01.863) 0:03:24.828 ******** 2026-03-26 05:00:30.662898 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.662908 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.662919 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.662929 | orchestrator | 2026-03-26 05:00:30.662940 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-26 05:00:30.662951 | orchestrator | Thursday 26 March 2026 05:00:13 +0000 (0:00:01.432) 0:03:26.260 ******** 2026-03-26 05:00:30.662962 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:00:30.662972 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:00:30.662983 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:00:30.662994 | orchestrator | 2026-03-26 05:00:30.663004 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-26 05:00:30.663015 | orchestrator | Thursday 26 March 2026 05:00:15 +0000 (0:00:01.333) 0:03:27.594 ******** 2026-03-26 05:00:30.663026 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:00:30.663037 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:00:30.663047 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:00:30.663058 | orchestrator | 2026-03-26 05:00:30.663069 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] 
***************************** 2026-03-26 05:00:30.663079 | orchestrator | Thursday 26 March 2026 05:00:16 +0000 (0:00:01.425) 0:03:29.019 ******** 2026-03-26 05:00:30.663090 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.663101 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.663111 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.663122 | orchestrator | 2026-03-26 05:00:30.663133 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-26 05:00:30.663144 | orchestrator | Thursday 26 March 2026 05:00:18 +0000 (0:00:01.814) 0:03:30.834 ******** 2026-03-26 05:00:30.663154 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.663165 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.663175 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.663186 | orchestrator | 2026-03-26 05:00:30.663197 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-26 05:00:30.663208 | orchestrator | Thursday 26 March 2026 05:00:19 +0000 (0:00:01.345) 0:03:32.180 ******** 2026-03-26 05:00:30.663218 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.663229 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.663240 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.663250 | orchestrator | 2026-03-26 05:00:30.663261 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-26 05:00:30.663278 | orchestrator | Thursday 26 March 2026 05:00:21 +0000 (0:00:02.167) 0:03:34.348 ******** 2026-03-26 05:00:30.663289 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:00:30.663300 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:00:30.663311 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:00:30.663321 | orchestrator | 2026-03-26 05:00:30.663332 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-26 05:00:30.663343 | orchestrator | 
Thursday 26 March 2026 05:00:23 +0000 (0:00:01.441) 0:03:35.790 ******** 2026-03-26 05:00:30.663361 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:00:30.663372 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:00:30.663383 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:00:30.663394 | orchestrator | 2026-03-26 05:00:30.663404 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-26 05:00:30.663415 | orchestrator | Thursday 26 March 2026 05:00:24 +0000 (0:00:01.411) 0:03:37.201 ******** 2026-03-26 05:00:30.663425 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:00:30.663436 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:00:30.663447 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:00:30.663457 | orchestrator | 2026-03-26 05:00:30.663468 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-26 05:00:30.663479 | orchestrator | Thursday 26 March 2026 05:00:26 +0000 (0:00:01.722) 0:03:38.923 ******** 2026-03-26 05:00:30.663500 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191112 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191225 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191243 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191256 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191285 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191320 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:37.191365 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:37.191388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:37.191412 | orchestrator | 2026-03-26 05:00:37.191425 | orchestrator | TASK [ovn-db : Copying over config.json 
files for services] ******************** 2026-03-26 05:00:37.191438 | orchestrator | Thursday 26 March 2026 05:00:30 +0000 (0:00:04.227) 0:03:43.151 ******** 2026-03-26 05:00:37.191450 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191476 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191489 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191500 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:37.191521 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204077 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204206 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:52.204262 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:52.204304 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:52.204342 | orchestrator | 2026-03-26 05:00:52.204363 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-03-26 05:00:52.204383 | orchestrator | Thursday 26 March 2026 05:00:37 +0000 (0:00:06.531) 0:03:49.683 ******** 2026-03-26 05:00:52.204396 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-03-26 05:00:52.204407 | orchestrator | 2026-03-26 05:00:52.204418 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-03-26 05:00:52.204429 | orchestrator | Thursday 26 March 2026 05:00:39 +0000 (0:00:01.944) 0:03:51.627 ******** 2026-03-26 05:00:52.204440 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:00:52.204452 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:00:52.204478 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:00:52.204490 | orchestrator | 2026-03-26 05:00:52.204501 | orchestrator | TASK 
[ovn-db : Copying over config.json files for OVN relay services] ********** 2026-03-26 05:00:52.204512 | orchestrator | Thursday 26 March 2026 05:00:40 +0000 (0:00:01.864) 0:03:53.492 ******** 2026-03-26 05:00:52.204523 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:00:52.204533 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:00:52.204544 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:00:52.204554 | orchestrator | 2026-03-26 05:00:52.204565 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-03-26 05:00:52.204576 | orchestrator | Thursday 26 March 2026 05:00:43 +0000 (0:00:02.670) 0:03:56.163 ******** 2026-03-26 05:00:52.204586 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:00:52.204597 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:00:52.204643 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:00:52.204657 | orchestrator | 2026-03-26 05:00:52.204669 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-03-26 05:00:52.204693 | orchestrator | Thursday 26 March 2026 05:00:46 +0000 (0:00:03.022) 0:03:59.186 ******** 2026-03-26 05:00:52.204708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': 
{'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:52.204804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:57.028106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:57.028244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:00:57.028309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-03-26 05:00:57.028330 | orchestrator | 2026-03-26 05:00:57.028352 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-26 05:00:57.028372 | orchestrator | Thursday 26 March 2026 05:00:52 +0000 (0:00:05.498) 0:04:04.684 ******** 2026-03-26 05:00:57.028392 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 05:00:57.028407 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 05:00:57.028418 | orchestrator | } 2026-03-26 05:00:57.028430 | orchestrator | changed: [testbed-node-1] => { 2026-03-26 05:00:57.028440 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 05:00:57.028451 | orchestrator | } 2026-03-26 05:00:57.028461 | orchestrator | changed: [testbed-node-2] => { 2026-03-26 05:00:57.028472 | orchestrator |  "msg": "Notifying handlers" 2026-03-26 05:00:57.028483 | orchestrator | } 2026-03-26 05:00:57.028494 | orchestrator | 2026-03-26 05:00:57.028505 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-03-26 05:00:57.028515 | orchestrator | Thursday 26 March 2026 05:00:53 +0000 (0:00:01.527) 0:04:06.212 ******** 2026-03-26 05:00:57.028528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-26 05:00:57.028778 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-26 05:02:26.508400 | orchestrator | 2026-03-26 05:02:26.508515 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-03-26 05:02:26.508532 | orchestrator | Thursday 26 March 2026 05:00:57 +0000 (0:00:03.304) 0:04:09.517 ******** 2026-03-26 05:02:26.508543 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-03-26 05:02:26.508554 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-03-26 05:02:26.508563 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-03-26 05:02:26.508573 | orchestrator | 2026-03-26 05:02:26.508583 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-03-26 05:02:26.508594 | orchestrator | Thursday 26 March 2026 05:00:59 +0000 (0:00:02.246) 0:04:11.764 ******** 2026-03-26 05:02:26.508604 | orchestrator | changed: [testbed-node-0] => { 2026-03-26 05:02:26.508616 | 
orchestrator |  "msg": "Notifying handlers"
2026-03-26 05:02:26.508626 | orchestrator | }
2026-03-26 05:02:26.508636 | orchestrator | changed: [testbed-node-1] => {
2026-03-26 05:02:26.508645 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 05:02:26.508655 | orchestrator | }
2026-03-26 05:02:26.508664 | orchestrator | changed: [testbed-node-2] => {
2026-03-26 05:02:26.508674 | orchestrator |  "msg": "Notifying handlers"
2026-03-26 05:02:26.508683 | orchestrator | }
2026-03-26 05:02:26.508693 | orchestrator |
2026-03-26 05:02:26.508703 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-26 05:02:26.508713 | orchestrator | Thursday 26 March 2026 05:01:00 +0000 (0:00:01.561) 0:04:13.325 ********
2026-03-26 05:02:26.508722 | orchestrator |
2026-03-26 05:02:26.508732 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-26 05:02:26.508742 | orchestrator | Thursday 26 March 2026 05:01:01 +0000 (0:00:00.424) 0:04:13.750 ********
2026-03-26 05:02:26.508751 | orchestrator |
2026-03-26 05:02:26.508761 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-26 05:02:26.508786 | orchestrator | Thursday 26 March 2026 05:01:01 +0000 (0:00:00.453) 0:04:14.203 ********
2026-03-26 05:02:26.508796 | orchestrator |
2026-03-26 05:02:26.508805 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-26 05:02:26.508815 | orchestrator | Thursday 26 March 2026 05:01:02 +0000 (0:00:01.021) 0:04:15.225 ********
2026-03-26 05:02:26.508824 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:02:26.508834 | orchestrator | changed: [testbed-node-2]
2026-03-26 05:02:26.508844 | orchestrator | changed: [testbed-node-0]
2026-03-26 05:02:26.508853 | orchestrator |
2026-03-26 05:02:26.508863 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-26 05:02:26.508873 | orchestrator | Thursday 26 March 2026 05:01:19 +0000 (0:00:16.856) 0:04:32.081 ********
2026-03-26 05:02:26.508961 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:02:26.508974 | orchestrator | changed: [testbed-node-0]
2026-03-26 05:02:26.508986 | orchestrator | changed: [testbed-node-2]
2026-03-26 05:02:26.508997 | orchestrator |
2026-03-26 05:02:26.509008 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-03-26 05:02:26.509019 | orchestrator | Thursday 26 March 2026 05:01:36 +0000 (0:00:16.931) 0:04:49.013 ********
2026-03-26 05:02:26.509029 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-03-26 05:02:26.509041 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-03-26 05:02:26.509052 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-03-26 05:02:26.509063 | orchestrator |
2026-03-26 05:02:26.509074 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-26 05:02:26.509085 | orchestrator | Thursday 26 March 2026 05:01:47 +0000 (0:00:11.413) 0:05:00.426 ********
2026-03-26 05:02:26.509097 | orchestrator | changed: [testbed-node-0]
2026-03-26 05:02:26.509108 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:02:26.509119 | orchestrator | changed: [testbed-node-2]
2026-03-26 05:02:26.509130 | orchestrator |
2026-03-26 05:02:26.509142 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-26 05:02:26.509152 | orchestrator | Thursday 26 March 2026 05:02:05 +0000 (0:00:17.732) 0:05:18.159 ********
2026-03-26 05:02:26.509164 | orchestrator | Pausing for 5 seconds
2026-03-26 05:02:26.509175 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:02:26.509186 | orchestrator |
2026-03-26 05:02:26.509197 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-26 05:02:26.509208 | orchestrator | Thursday 26 March 2026 05:02:11 +0000 (0:00:06.175) 0:05:24.334 ********
2026-03-26 05:02:26.509219 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:02:26.509230 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:02:26.509241 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:02:26.509252 | orchestrator |
2026-03-26 05:02:26.509263 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-26 05:02:26.509274 | orchestrator | Thursday 26 March 2026 05:02:13 +0000 (0:00:01.883) 0:05:26.217 ********
2026-03-26 05:02:26.509285 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:02:26.509296 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:02:26.509306 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:02:26.509316 | orchestrator |
2026-03-26 05:02:26.509325 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-26 05:02:26.509335 | orchestrator | Thursday 26 March 2026 05:02:15 +0000 (0:00:01.825) 0:05:28.043 ********
2026-03-26 05:02:26.509344 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:02:26.509354 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:02:26.509363 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:02:26.509372 | orchestrator |
2026-03-26 05:02:26.509382 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-26 05:02:26.509391 | orchestrator | Thursday 26 March 2026 05:02:17 +0000 (0:00:01.640) 0:05:29.908 ********
2026-03-26 05:02:26.509401 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:02:26.509410 | orchestrator | changed: [testbed-node-0]
2026-03-26 05:02:26.509420 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:02:26.509430 | orchestrator |
2026-03-26 05:02:26.509439 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-26 05:02:26.509448 | orchestrator | Thursday 26 March 2026
05:02:19 +0000 (0:00:01.640) 0:05:31.549 ********
2026-03-26 05:02:26.509458 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:02:26.509467 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:02:26.509477 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:02:26.509486 | orchestrator |
2026-03-26 05:02:26.509496 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-26 05:02:26.509521 | orchestrator | Thursday 26 March 2026 05:02:20 +0000 (0:00:01.938) 0:05:33.487 ********
2026-03-26 05:02:26.509532 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:02:26.509549 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:02:26.509558 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:02:26.509568 | orchestrator |
2026-03-26 05:02:26.509577 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-03-26 05:02:26.509587 | orchestrator | Thursday 26 March 2026 05:02:22 +0000 (0:00:01.912) 0:05:35.399 ********
2026-03-26 05:02:26.509596 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-03-26 05:02:26.509606 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-03-26 05:02:26.509616 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-03-26 05:02:26.509625 | orchestrator |
2026-03-26 05:02:26.509634 | orchestrator | PLAY RECAP *********************************************************************
2026-03-26 05:02:26.509645 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-26 05:02:26.509656 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-26 05:02:26.509666 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-26 05:02:26.509675 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 05:02:26.509690 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 05:02:26.509700 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-03-26 05:02:26.509709 | orchestrator |
2026-03-26 05:02:26.509719 | orchestrator |
2026-03-26 05:02:26.509729 | orchestrator | TASKS RECAP ********************************************************************
2026-03-26 05:02:26.509738 | orchestrator | Thursday 26 March 2026 05:02:26 +0000 (0:00:03.159) 0:05:38.559 ********
2026-03-26 05:02:26.509748 | orchestrator | ===============================================================================
2026-03-26 05:02:26.509757 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.77s
2026-03-26 05:02:26.509767 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.18s
2026-03-26 05:02:26.509777 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.73s
2026-03-26 05:02:26.509786 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.93s
2026-03-26 05:02:26.509796 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.86s
2026-03-26 05:02:26.509805 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 11.41s
2026-03-26 05:02:26.509815 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.53s
2026-03-26 05:02:26.509824 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.18s
2026-03-26 05:02:26.509834 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.50s
2026-03-26 05:02:26.509843 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.23s
2026-03-26 05:02:26.509853 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.55s
2026-03-26 05:02:26.509862 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.42s
2026-03-26 05:02:26.509872 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.30s
2026-03-26 05:02:26.509901 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.16s
2026-03-26 05:02:26.509912 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.14s
2026-03-26 05:02:26.509921 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.12s
2026-03-26 05:02:26.509931 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.07s
2026-03-26 05:02:26.509948 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 3.02s
2026-03-26 05:02:26.509958 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.77s
2026-03-26 05:02:26.509967 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.67s
2026-03-26 05:02:26.838336 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-03-26 05:02:26.838438 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-26 05:02:26.838454 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-03-26 05:02:26.849072 | orchestrator | + set -e
2026-03-26 05:02:26.849143 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-03-26 05:02:26.849156 | orchestrator | ++ export INTERACTIVE=false
2026-03-26 05:02:26.849168 | orchestrator | ++ INTERACTIVE=false
2026-03-26 05:02:26.849177 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-03-26 05:02:26.849187 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-03-26 05:02:26.849197 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-03-26 05:02:29.106188 | orchestrator |
2026-03-26 05:02:29 | INFO  | Task 3cc48515-9882-4eae-9b27-8ff8065ea8e3 (ceph-rolling_update) was prepared for execution.
2026-03-26 05:02:29.106256 | orchestrator | 2026-03-26 05:02:29 | INFO  | It takes a moment until task 3cc48515-9882-4eae-9b27-8ff8065ea8e3 (ceph-rolling_update) has been started and output is visible here.
2026-03-26 05:03:56.309007 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-26 05:03:56.309108 | orchestrator | 2.16.14
2026-03-26 05:03:56.309141 | orchestrator |
2026-03-26 05:03:56.309152 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-03-26 05:03:56.309163 | orchestrator |
2026-03-26 05:03:56.309172 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-03-26 05:03:56.309181 | orchestrator | Thursday 26 March 2026 05:02:38 +0000 (0:00:01.792) 0:00:01.792 ********
2026-03-26 05:03:56.309190 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-03-26 05:03:56.309200 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-03-26 05:03:56.309208 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-03-26 05:03:56.309217 | orchestrator | skipping: [localhost]
2026-03-26 05:03:56.309226 | orchestrator |
2026-03-26 05:03:56.309235 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-03-26 05:03:56.309243 | orchestrator |
2026-03-26 05:03:56.309252 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-03-26 05:03:56.309261 | orchestrator | Thursday 26 March 2026 05:02:39 +0000 (0:00:01.712) 0:00:03.505 ********
2026-03-26 05:03:56.309270 | orchestrator | ok: [testbed-node-0] => {
2026-03-26 05:03:56.309278 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-26 05:03:56.309287 | orchestrator | }
2026-03-26 05:03:56.309296 | orchestrator | ok: [testbed-node-1] => {
2026-03-26 05:03:56.309304 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-26 05:03:56.309313 | orchestrator | }
2026-03-26 05:03:56.309322 | orchestrator | ok: [testbed-node-2] => {
2026-03-26 05:03:56.309330 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-26 05:03:56.309339 | orchestrator | }
2026-03-26 05:03:56.309347 | orchestrator | ok: [testbed-node-3] => {
2026-03-26 05:03:56.309356 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-26 05:03:56.309365 | orchestrator | }
2026-03-26 05:03:56.309374 | orchestrator | ok: [testbed-node-4] => {
2026-03-26 05:03:56.309382 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-26 05:03:56.309391 | orchestrator | }
2026-03-26 05:03:56.309399 | orchestrator | ok: [testbed-node-5] => {
2026-03-26 05:03:56.309408 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-26 05:03:56.309416 | orchestrator | }
2026-03-26 05:03:56.309425 | orchestrator | ok: [testbed-manager] => {
2026-03-26 05:03:56.309453 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-03-26 05:03:56.309462 | orchestrator | }
2026-03-26 05:03:56.309470 | orchestrator |
2026-03-26 05:03:56.309479 | orchestrator | TASK [Gather facts] ************************************************************
2026-03-26 05:03:56.309488 | orchestrator | Thursday 26 March 2026 05:02:47 +0000 (0:00:07.308) 0:00:10.813 ********
2026-03-26 05:03:56.309496 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:03:56.309504 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:03:56.309513 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:03:56.309522 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:03:56.309530 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:03:56.309539 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:03:56.309547 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.309556 | orchestrator |
2026-03-26 05:03:56.309566 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-03-26 05:03:56.309576 | orchestrator | Thursday 26 March 2026 05:02:53 +0000 (0:00:06.298) 0:00:17.111 ********
2026-03-26 05:03:56.309587 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:03:56.309597 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:03:56.309607 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:03:56.309617 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:03:56.309627 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:03:56.309637 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:03:56.309647 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:03:56.309657 | orchestrator |
2026-03-26 05:03:56.309666 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-03-26 05:03:56.309676 | orchestrator | Thursday 26 March 2026 05:03:24 +0000 (0:00:31.499) 0:00:48.611 ********
2026-03-26 05:03:56.309687 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.309697 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.309707 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.309717 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.309727 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.309736 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.309746 | orchestrator |
ok: [testbed-manager]
2026-03-26 05:03:56.309755 | orchestrator |
2026-03-26 05:03:56.309766 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-26 05:03:56.309776 | orchestrator | Thursday 26 March 2026 05:03:27 +0000 (0:00:02.108) 0:00:50.719 ********
2026-03-26 05:03:56.309787 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-26 05:03:56.309797 | orchestrator |
2026-03-26 05:03:56.309807 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-26 05:03:56.309817 | orchestrator | Thursday 26 March 2026 05:03:29 +0000 (0:00:02.740) 0:00:53.460 ********
2026-03-26 05:03:56.309827 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.309836 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.309846 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.309856 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.309866 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.309876 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.309886 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.309895 | orchestrator |
2026-03-26 05:03:56.309918 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-26 05:03:56.309927 | orchestrator | Thursday 26 March 2026 05:03:32 +0000 (0:00:02.663) 0:00:56.123 ********
2026-03-26 05:03:56.309936 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.309944 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.309959 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.309968 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.309976 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.309984 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.309993 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.310001 | orchestrator |
2026-03-26 05:03:56.310010 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 05:03:56.310074 | orchestrator | Thursday 26 March 2026 05:03:34 +0000 (0:00:01.965) 0:00:58.089 ********
2026-03-26 05:03:56.310083 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.310092 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.310101 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.310144 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.310154 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.310162 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.310171 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.310180 | orchestrator |
2026-03-26 05:03:56.310188 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 05:03:56.310197 | orchestrator | Thursday 26 March 2026 05:03:36 +0000 (0:00:02.538) 0:01:00.627 ********
2026-03-26 05:03:56.310205 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.310214 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.310222 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.310231 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.310239 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.310248 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.310256 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.310265 | orchestrator |
2026-03-26 05:03:56.310273 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-26 05:03:56.310324 | orchestrator | Thursday 26 March 2026 05:03:38 +0000 (0:00:01.987) 0:01:02.615 ********
2026-03-26 05:03:56.310338 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.310347 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.310356 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.310364 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.310372 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.310381 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.310389 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.310398 | orchestrator |
2026-03-26 05:03:56.310407 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-26 05:03:56.310415 | orchestrator | Thursday 26 March 2026 05:03:41 +0000 (0:00:02.137) 0:01:04.752 ********
2026-03-26 05:03:56.310424 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.310432 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.310441 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.310449 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.310457 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.310466 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.310475 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.310483 | orchestrator |
2026-03-26 05:03:56.310492 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-26 05:03:56.310500 | orchestrator | Thursday 26 March 2026 05:03:43 +0000 (0:00:01.905) 0:01:06.658 ********
2026-03-26 05:03:56.310509 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:03:56.310517 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:03:56.310526 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:03:56.310534 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:03:56.310542 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:03:56.310551 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:03:56.310559 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:03:56.310568 | orchestrator |
2026-03-26 05:03:56.310576 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-26 05:03:56.310585 | orchestrator | Thursday 26 March 2026 05:03:45 +0000 (0:00:02.534) 0:01:09.192 ********
2026-03-26 05:03:56.310593 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.310602 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.310618 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.310626 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.310635 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.310643 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.310652 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.310660 | orchestrator |
2026-03-26 05:03:56.310669 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-26 05:03:56.310677 | orchestrator | Thursday 26 March 2026 05:03:47 +0000 (0:00:02.237) 0:01:11.430 ********
2026-03-26 05:03:56.310686 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:03:56.310694 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:03:56.310703 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:03:56.310711 | orchestrator |
2026-03-26 05:03:56.310719 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-26 05:03:56.310728 | orchestrator | Thursday 26 March 2026 05:03:49 +0000 (0:00:01.661) 0:01:13.092 ********
2026-03-26 05:03:56.310737 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:03:56.310745 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:03:56.310754 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:03:56.310762 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:03:56.310771 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:03:56.310779 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:03:56.310788 | orchestrator | ok: [testbed-manager]
2026-03-26 05:03:56.310796 | orchestrator |
2026-03-26 05:03:56.310805 | orchestrator | TASK [ceph-facts :
Find a running mon container] *******************************
2026-03-26 05:03:56.310813 | orchestrator | Thursday 26 March 2026 05:03:51 +0000 (0:00:02.119) 0:01:15.211 ********
2026-03-26 05:03:56.310822 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:03:56.310831 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:03:56.310839 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:03:56.310848 | orchestrator |
2026-03-26 05:03:56.310856 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-26 05:03:56.310864 | orchestrator | Thursday 26 March 2026 05:03:54 +0000 (0:00:03.307) 0:01:18.519 ********
2026-03-26 05:03:56.310880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:04:18.618004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 05:04:18.618279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 05:04:18.618299 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.618312 | orchestrator |
2026-03-26 05:04:18.618325 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-26 05:04:18.618338 | orchestrator | Thursday 26 March 2026 05:03:56 +0000 (0:00:01.437) 0:01:19.956 ********
2026-03-26 05:04:18.618350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618365 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618376 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618387 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.618398 | orchestrator |
2026-03-26 05:04:18.618410 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-26 05:04:18.618437 | orchestrator | Thursday 26 March 2026 05:03:58 +0000 (0:00:01.935) 0:01:21.892 ********
2026-03-26 05:04:18.618474 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618490 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618502 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618513 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.618524 | orchestrator |
2026-03-26 05:04:18.618535 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-26 05:04:18.618546 | orchestrator | Thursday 26 March 2026 05:03:59 +0000 (0:00:01.194) 0:01:23.086 ********
2026-03-26 05:04:18.618561 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c1b85917b265', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:03:52.185577', 'end': '2026-03-26 05:03:52.243863', 'delta': '0:00:00.058286', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c1b85917b265'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618600 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1fb5a820b9f6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:03:53.038758', 'end': '2026-03-26 05:03:53.088240', 'delta': '0:00:00.049482', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1fb5a820b9f6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618616 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2a382ea60872', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:03:53.633676', 'end': '2026-03-26 05:03:53.688108', 'delta': '0:00:00.054432', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a382ea60872'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:04:18.618628 | orchestrator |
2026-03-26 05:04:18.618648 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-26 05:04:18.618661 | orchestrator | Thursday 26 March 2026 05:04:00 +0000 (0:00:01.242) 0:01:24.329 ********
2026-03-26 05:04:18.618673 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:04:18.618686 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:04:18.618699 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:04:18.618712 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:04:18.618724 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:04:18.618742 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:04:18.618756 | orchestrator | ok: [testbed-manager]
2026-03-26 05:04:18.618769 | orchestrator |
2026-03-26 05:04:18.618780 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-26 05:04:18.618791 | orchestrator | Thursday 26 March 2026 05:04:02 +0000 (0:00:01.279) 0:01:26.504 ********
2026-03-26 05:04:18.618802 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.618813 | orchestrator |
2026-03-26 05:04:18.618824 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1]
*********************************
2026-03-26 05:04:18.618835 | orchestrator | Thursday 26 March 2026 05:04:04 +0000 (0:00:01.279) 0:01:27.784 ********
2026-03-26 05:04:18.618845 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:04:18.618856 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:04:18.618867 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:04:18.618877 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:04:18.618888 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:04:18.618898 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:04:18.618909 | orchestrator | ok: [testbed-manager]
2026-03-26 05:04:18.618920 | orchestrator |
2026-03-26 05:04:18.618931 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 05:04:18.618941 | orchestrator | Thursday 26 March 2026 05:04:06 +0000 (0:00:02.152) 0:01:29.937 ********
2026-03-26 05:04:18.618952 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:04:18.618963 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:04:18.618974 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:04:18.618984 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:04:18.618995 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:04:18.619006 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:04:18.619016 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-03-26 05:04:18.619027 | orchestrator |
2026-03-26 05:04:18.619038 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:04:18.619048 | orchestrator | Thursday 26 March 2026 05:04:09 +0000 (0:00:03.367) 0:01:33.305 ********
2026-03-26 05:04:18.619059 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:04:18.619070 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:04:18.619081 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:04:18.619091 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:04:18.619102 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:04:18.619113 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:04:18.619123 | orchestrator | ok: [testbed-manager]
2026-03-26 05:04:18.619134 | orchestrator |
2026-03-26 05:04:18.619145 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 05:04:18.619156 | orchestrator | Thursday 26 March 2026 05:04:11 +0000 (0:00:02.171) 0:01:35.477 ********
2026-03-26 05:04:18.619215 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.619228 | orchestrator |
2026-03-26 05:04:18.619239 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 05:04:18.619250 | orchestrator | Thursday 26 March 2026 05:04:12 +0000 (0:00:01.175) 0:01:36.652 ********
2026-03-26 05:04:18.619261 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.619271 | orchestrator |
2026-03-26 05:04:18.619282 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:04:18.619293 | orchestrator | Thursday 26 March 2026 05:04:14 +0000 (0:00:01.207) 0:01:37.859 ********
2026-03-26 05:04:18.619304 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.619322 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:04:18.619333 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:04:18.619344 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:18.619354 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:18.619365 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:04:18.619376 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:04:18.619386 | orchestrator |
2026-03-26 05:04:18.619397 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 05:04:18.619408 | orchestrator | Thursday 26 March 2026 05:04:16 +0000 (0:00:02.518) 0:01:40.378 ********
2026-03-26 05:04:18.619419 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:18.619430 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:04:18.619440 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:04:18.619451 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:18.619462 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:18.619473 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:04:18.619492 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:04:29.084585 | orchestrator |
2026-03-26 05:04:29.084705 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 05:04:29.084722 | orchestrator | Thursday 26 March 2026 05:04:18 +0000 (0:00:01.888) 0:01:42.266 ********
2026-03-26 05:04:29.084769 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:29.084782 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:04:29.084792 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:04:29.084801 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:29.084810 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:29.084818 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:04:29.084827 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:04:29.084836 | orchestrator |
2026-03-26 05:04:29.084846 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 05:04:29.084855 | orchestrator | Thursday 26 March 2026 05:04:20 +0000 (0:00:02.085) 0:01:44.352 ********
2026-03-26 05:04:29.084863 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:29.084872 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:04:29.084881 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:04:29.084889 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:29.084898 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:29.084906 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:04:29.084914 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:04:29.084923 | orchestrator |
2026-03-26 05:04:29.084932 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 05:04:29.084940 | orchestrator | Thursday 26 March 2026 05:04:22 +0000 (0:00:01.949) 0:01:46.301 ********
2026-03-26 05:04:29.084949 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:29.084957 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:04:29.084966 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:04:29.084975 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:29.084983 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:29.085006 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:04:29.085015 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:04:29.085023 | orchestrator |
2026-03-26 05:04:29.085032 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 05:04:29.085041 | orchestrator | Thursday 26 March 2026 05:04:24 +0000 (0:00:02.102) 0:01:48.404 ********
2026-03-26 05:04:29.085049 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:04:29.085058 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:04:29.085066 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:04:29.085075 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:29.085083 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:29.085092 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:04:29.085100 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:04:29.085109 | orchestrator |
2026-03-26 05:04:29.085138 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 05:04:29.085151 | orchestrator | Thursday 26 March 2026 05:04:26 +0000 (0:00:02.017)
0:01:50.421 ******** 2026-03-26 05:04:29.085161 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:04:29.085171 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:04:29.085181 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:04:29.085241 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:04:29.085257 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:04:29.085272 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:04:29.085287 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:04:29.085298 | orchestrator | 2026-03-26 05:04:29.085308 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 05:04:29.085319 | orchestrator | Thursday 26 March 2026 05:04:28 +0000 (0:00:02.134) 0:01:52.556 ******** 2026-03-26 05:04:29.085332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.085346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.085358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': 
'0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.085388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:04:29.085402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.085412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.085428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.085454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': 
[]}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:04:29.085469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.085486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.452818 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:04:29.452917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.452936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.452986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:04:29.453015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2e41bcf9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:04:29.453100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453123 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:04:29.453135 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.453169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:04:29.453256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.827820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.827932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.827945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7634648a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 
'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 05:04:29.827955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.827961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.827968 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:04:29.827989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.828005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}})  2026-03-26 05:04:29.828013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:04:29.828021 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}})  2026-03-26 05:04:29.828029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.828035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:04:29.828041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:04:29.828054 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'uuids': ['aef43475-035b-4229-a7d7-1e3432ab4dcb'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS']}})
2026-03-26 05:04:29.985618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082']}})
2026-03-26 05:04:29.985630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce600cf2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:04:29.985696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985717 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY', 'dm-uuid-CRYPT-LUKS2-c579629dafc941d5a76c63e3abbafb40-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985727 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:29.985739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:29.985759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'uuids': ['1d39f6c5-1f6c-4630-99cd-a410ca5e45d8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt']}})
2026-03-26 05:04:29.985787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'uuids': ['958c3d71-9b3b-484b-8cbf-f174ba1f6fac'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp']}})
2026-03-26 05:04:30.016325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7e352b46', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:04:30.016411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ddd7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:04:30.016421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e']}})
2026-03-26 05:04:30.016431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66']}})
2026-03-26 05:04:30.016457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016467 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-26 05:04:30.016528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-26 05:04:30.016539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016550 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG', 'dm-uuid-CRYPT-LUKS2-741ece0a80b8415aa2e2dcc695db5f53-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD', 'dm-uuid-CRYPT-LUKS2-4b88786507c84424981e8c33baf61cbe-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.016612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.107121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'uuids': ['4b887865-07c8-4424-981e-8c33baf61cbe'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD']}})
2026-03-26 05:04:30.107303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'uuids': ['741ece0a-80b8-415a-a2e2-dcc695db5f53'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG']}})
2026-03-26 05:04:30.107330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771']}})
2026-03-26 05:04:30.107378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543']}})
2026-03-26 05:04:30.107389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.107416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:30.107450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48d73a84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:04:30.107474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:04:30.107493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt', 'dm-uuid-CRYPT-LUKS2-1d39f6c51f6c463099cda410ca5e45d8-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441703 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:31.441756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441771 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441794 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:04:31.441805 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441839 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441860 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441900 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-40-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-26 05:04:31.441914 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441925 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441945 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.441978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ece1d7d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:04:31.442001 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.442105 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:04:31.582686 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:04:31.582788 | orchestrator |
2026-03-26 05:04:31.582804 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-26 05:04:31.582838 | orchestrator | Thursday 26 March 2026 05:04:31 +0000 (0:00:02.526) 0:01:55.083 ********
2026-03-26 05:04:31.582854 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:31.582869 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:31.582880 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:31.582893 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:31.582920 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:04:31.582932 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor':
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.582969 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.582993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': 
[]}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.583007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.583026 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912082 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:04:31.912166 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912179 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912186 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912247 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912268 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912275 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912311 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912322 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2e41bcf9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912338 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912345 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:31.912356 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
05:04:31.912368 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148590 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148692 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148708 | orchestrator | 
skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148739 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148751 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148781 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148817 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7634648a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148838 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148857 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148869 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:04:32.148884 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.148904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.270682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.270831 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.270891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.270914 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.270936 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.270978 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.270999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
[... repetitive per-device "skipping" loop items elided: testbed-node-3, testbed-node-4 and testbed-node-5 skipped every discovered device (sda, sdb, sdc, sdd, sr0, loop0-loop7, dm-0, dm-1, dm-2, dm-3) because the condition 'osd_auto_discovery | default(False) | bool' evaluated false; testbed-manager skipped its devices because 'inventory_hostname in groups.get(osd_group_name, [])' evaluated false ...]
2026-03-26 05:04:32.420950 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:04:32.594727 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:04:32.628113 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in
groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:32.628135 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': 
['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901205 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901336 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 
'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901347 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901370 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2ece1d7d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part16', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part14', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part14'], 'uuids': 
[], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part15', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part1', 'scsi-SQEMU_QEMU_HARDDISK_2ece1d7d-b762-44e0-80cf-0ec8d4e65a06-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901401 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901409 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901416 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901423 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:04:40.901433 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:04:40.901440 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:04:40.901447 | orchestrator | 2026-03-26 05:04:40.901454 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:04:40.901467 | orchestrator | Thursday 26 March 2026 05:04:33 +0000 (0:00:02.390) 0:01:57.474 ******** 2026-03-26 05:04:40.901473 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:04:40.901481 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:04:40.901487 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:04:40.901493 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:04:40.901499 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:04:40.901506 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:04:40.901512 | orchestrator | ok: [testbed-manager] 2026-03-26 05:04:40.901518 | orchestrator | 2026-03-26 05:04:40.901525 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:04:40.901531 | orchestrator | Thursday 26 March 2026 05:04:36 +0000 (0:00:02.554) 0:02:00.029 ******** 
2026-03-26 05:04:40.901537 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:04:40.901543 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:04:40.901550 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:04:40.901556 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:04:40.901562 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:04:40.901568 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:04:40.901574 | orchestrator | ok: [testbed-manager] 2026-03-26 05:04:40.901580 | orchestrator | 2026-03-26 05:04:40.901587 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:04:40.901593 | orchestrator | Thursday 26 March 2026 05:04:38 +0000 (0:00:02.027) 0:02:02.056 ******** 2026-03-26 05:04:40.901599 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:04:40.901605 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:04:40.901612 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:04:40.901618 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:04:40.901624 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:04:40.901630 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:04:40.901636 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:04:40.901642 | orchestrator | 2026-03-26 05:04:40.901649 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:04:40.901659 | orchestrator | Thursday 26 March 2026 05:04:40 +0000 (0:00:02.483) 0:02:04.540 ******** 2026-03-26 05:05:11.708808 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:05:11.708895 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:05:11.708902 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:05:11.708908 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.708913 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:05:11.708919 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:05:11.708924 | orchestrator | skipping: [testbed-manager] 2026-03-26 
05:05:11.708930 | orchestrator | 2026-03-26 05:05:11.708936 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:05:11.708942 | orchestrator | Thursday 26 March 2026 05:04:42 +0000 (0:00:01.950) 0:02:06.491 ******** 2026-03-26 05:05:11.708948 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:05:11.708953 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:05:11.708958 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:05:11.708963 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.708968 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:05:11.708973 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:05:11.708978 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-03-26 05:05:11.708984 | orchestrator | 2026-03-26 05:05:11.709025 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:05:11.709031 | orchestrator | Thursday 26 March 2026 05:04:45 +0000 (0:00:02.589) 0:02:09.080 ******** 2026-03-26 05:05:11.709036 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:05:11.709041 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:05:11.709046 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:05:11.709051 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.709057 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:05:11.709062 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:05:11.709083 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:05:11.709088 | orchestrator | 2026-03-26 05:05:11.709093 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:05:11.709098 | orchestrator | Thursday 26 March 2026 05:04:47 +0000 (0:00:01.980) 0:02:11.061 ******** 2026-03-26 05:05:11.709104 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:05:11.709109 | 
orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-26 05:05:11.709115 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-26 05:05:11.709120 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-26 05:05:11.709125 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-26 05:05:11.709130 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:05:11.709134 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-26 05:05:11.709139 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-26 05:05:11.709144 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-26 05:05:11.709149 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-26 05:05:11.709154 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-26 05:05:11.709159 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-26 05:05:11.709164 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-26 05:05:11.709169 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-26 05:05:11.709175 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-26 05:05:11.709179 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-26 05:05:11.709184 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-26 05:05:11.709189 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-26 05:05:11.709194 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-26 05:05:11.709199 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-26 05:05:11.709204 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-26 05:05:11.709209 | orchestrator | 2026-03-26 05:05:11.709214 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:05:11.709219 | orchestrator | Thursday 26 
March 2026 05:04:51 +0000 (0:00:04.227) 0:02:15.289 ******** 2026-03-26 05:05:11.709224 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 05:05:11.709230 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 05:05:11.709235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 05:05:11.709240 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:05:11.709244 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-26 05:05:11.709249 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-26 05:05:11.709254 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-26 05:05:11.709259 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:05:11.709264 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-26 05:05:11.709269 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-26 05:05:11.709274 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-26 05:05:11.709318 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:05:11.709329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 05:05:11.709337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 05:05:11.709345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 05:05:11.709354 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.709359 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-26 05:05:11.709364 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-26 05:05:11.709369 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-26 05:05:11.709376 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:05:11.709390 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-26 05:05:11.709396 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-26 05:05:11.709402 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-26 05:05:11.709409 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:05:11.709426 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-26 05:05:11.709432 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-26 05:05:11.709438 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-26 05:05:11.709444 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:05:11.709450 | orchestrator | 2026-03-26 05:05:11.709460 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:05:11.709466 | orchestrator | Thursday 26 March 2026 05:04:53 +0000 (0:00:02.297) 0:02:17.586 ******** 2026-03-26 05:05:11.709472 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:05:11.709478 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:05:11.709484 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:05:11.709490 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:05:11.709496 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 05:05:11.709502 | orchestrator | 2026-03-26 05:05:11.709509 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:05:11.709516 | orchestrator | Thursday 26 March 2026 05:04:55 +0000 (0:00:02.018) 0:02:19.605 ******** 2026-03-26 05:05:11.709522 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.709528 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:05:11.709534 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:05:11.709540 | orchestrator | 2026-03-26 05:05:11.709545 | orchestrator | TASK [ceph-facts : 
Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:05:11.709551 | orchestrator | Thursday 26 March 2026 05:04:57 +0000 (0:00:01.688) 0:02:21.294 ******** 2026-03-26 05:05:11.709557 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.709563 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:05:11.709569 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:05:11.709574 | orchestrator | 2026-03-26 05:05:11.709580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:05:11.709586 | orchestrator | Thursday 26 March 2026 05:04:59 +0000 (0:00:01.410) 0:02:22.704 ******** 2026-03-26 05:05:11.709592 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.709598 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:05:11.709603 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:05:11.709610 | orchestrator | 2026-03-26 05:05:11.709616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:05:11.709621 | orchestrator | Thursday 26 March 2026 05:05:00 +0000 (0:00:01.340) 0:02:24.045 ******** 2026-03-26 05:05:11.709627 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:05:11.709633 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:05:11.709639 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:05:11.709645 | orchestrator | 2026-03-26 05:05:11.709650 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:05:11.709656 | orchestrator | Thursday 26 March 2026 05:05:01 +0000 (0:00:01.414) 0:02:25.460 ******** 2026-03-26 05:05:11.709662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:05:11.709668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:05:11.709673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 05:05:11.709679 | orchestrator | 
skipping: [testbed-node-3] 2026-03-26 05:05:11.709685 | orchestrator | 2026-03-26 05:05:11.709691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:05:11.709697 | orchestrator | Thursday 26 March 2026 05:05:03 +0000 (0:00:01.677) 0:02:27.138 ******** 2026-03-26 05:05:11.709702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:05:11.709712 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:05:11.709718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 05:05:11.709724 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.709730 | orchestrator | 2026-03-26 05:05:11.709736 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:05:11.709742 | orchestrator | Thursday 26 March 2026 05:05:05 +0000 (0:00:01.718) 0:02:28.856 ******** 2026-03-26 05:05:11.709748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:05:11.709754 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:05:11.709759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 05:05:11.709764 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:05:11.709769 | orchestrator | 2026-03-26 05:05:11.709774 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:05:11.709779 | orchestrator | Thursday 26 March 2026 05:05:06 +0000 (0:00:01.586) 0:02:30.443 ******** 2026-03-26 05:05:11.709784 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:05:11.709789 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:05:11.709794 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:05:11.709799 | orchestrator | 2026-03-26 05:05:11.709804 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:05:11.709809 
| orchestrator | Thursday 26 March 2026 05:05:08 +0000 (0:00:01.430) 0:02:31.874 ******** 2026-03-26 05:05:11.709814 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-26 05:05:11.709851 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-26 05:05:11.709857 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-26 05:05:11.709862 | orchestrator | 2026-03-26 05:05:11.709867 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:05:11.709872 | orchestrator | Thursday 26 March 2026 05:05:09 +0000 (0:00:01.498) 0:02:33.372 ******** 2026-03-26 05:05:11.709877 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:05:11.709882 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:05:11.709888 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:05:11.709893 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:05:11.709902 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:05:58.091330 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:05:58.091464 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:05:58.091479 | orchestrator | 2026-03-26 05:05:58.091505 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:05:58.091517 | orchestrator | Thursday 26 March 2026 05:05:11 +0000 (0:00:01.970) 0:02:35.343 ******** 2026-03-26 05:05:58.091529 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:05:58.091540 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:05:58.091550 | orchestrator | ok: [testbed-node-0 
-> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:05:58.091560 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:05:58.091570 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:05:58.091579 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:05:58.091589 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:05:58.091599 | orchestrator |
2026-03-26 05:05:58.091609 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-03-26 05:05:58.091619 | orchestrator | Thursday 26 March 2026 05:05:14 +0000 (0:00:03.036) 0:02:38.380 ********
2026-03-26 05:05:58.091649 | orchestrator | changed: [testbed-node-3]
2026-03-26 05:05:58.091661 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:05:58.091671 | orchestrator | changed: [testbed-node-4]
2026-03-26 05:05:58.091680 | orchestrator | changed: [testbed-node-5]
2026-03-26 05:05:58.091690 | orchestrator | changed: [testbed-node-2]
2026-03-26 05:05:58.091699 | orchestrator | changed: [testbed-manager]
2026-03-26 05:05:58.091709 | orchestrator | changed: [testbed-node-0]
2026-03-26 05:05:58.091718 | orchestrator |
2026-03-26 05:05:58.091728 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-03-26 05:05:58.091738 | orchestrator | Thursday 26 March 2026 05:05:22 +0000 (0:00:08.251) 0:02:46.631 ********
2026-03-26 05:05:58.091747 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.091757 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.091766 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.091775 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.091785 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.091794 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.091804 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.091813 | orchestrator |
2026-03-26 05:05:58.091823 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-03-26 05:05:58.091832 | orchestrator | Thursday 26 March 2026 05:05:25 +0000 (0:00:02.179) 0:02:48.811 ********
2026-03-26 05:05:58.091842 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.091851 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.091861 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.091871 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.091880 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.091890 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.091899 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.091909 | orchestrator |
2026-03-26 05:05:58.091918 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-03-26 05:05:58.091928 | orchestrator | Thursday 26 March 2026 05:05:27 +0000 (0:00:01.911) 0:02:50.723 ********
2026-03-26 05:05:58.091937 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.091947 | orchestrator | changed: [testbed-node-0]
2026-03-26 05:05:58.091956 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:05:58.091965 | orchestrator | changed: [testbed-node-2]
2026-03-26 05:05:58.091975 | orchestrator | changed: [testbed-node-3]
2026-03-26 05:05:58.091984 | orchestrator | changed: [testbed-node-4]
2026-03-26 05:05:58.091993 | orchestrator | changed: [testbed-node-5]
2026-03-26 05:05:58.092003 | orchestrator |
2026-03-26 05:05:58.092012 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-03-26 05:05:58.092022 | orchestrator | Thursday 26 March 2026 05:05:30 +0000 (0:00:03.105) 0:02:53.828 ********
2026-03-26 05:05:58.092033 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-03-26 05:05:58.092044 | orchestrator |
2026-03-26 05:05:58.092053 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-03-26 05:05:58.092063 | orchestrator | Thursday 26 March 2026 05:05:33 +0000 (0:00:02.913) 0:02:56.741 ********
2026-03-26 05:05:58.092072 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092082 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092091 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092101 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092110 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092119 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092129 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092138 | orchestrator |
2026-03-26 05:05:58.092148 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-03-26 05:05:58.092157 | orchestrator | Thursday 26 March 2026 05:05:34 +0000 (0:00:01.808) 0:02:58.550 ********
2026-03-26 05:05:58.092175 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092184 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092194 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092203 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092213 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092222 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092232 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092241 | orchestrator |
2026-03-26 05:05:58.092251 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-03-26 05:05:58.092261 | orchestrator | Thursday 26 March 2026 05:05:37 +0000 (0:00:02.111) 0:03:00.662 ********
2026-03-26 05:05:58.092270 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092296 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092307 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092316 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092326 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092336 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092346 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092356 | orchestrator |
2026-03-26 05:05:58.092371 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-03-26 05:05:58.092414 | orchestrator | Thursday 26 March 2026 05:05:39 +0000 (0:00:02.036) 0:03:02.698 ********
2026-03-26 05:05:58.092424 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092434 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092443 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092453 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092462 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092472 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092481 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092491 | orchestrator |
2026-03-26 05:05:58.092500 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-03-26 05:05:58.092510 | orchestrator | Thursday 26 March 2026 05:05:41 +0000 (0:00:02.169) 0:03:04.867 ********
2026-03-26 05:05:58.092519 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092529 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092538 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092548 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092557 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092566 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092576 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092585 | orchestrator |
2026-03-26 05:05:58.092595 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-03-26 05:05:58.092604 | orchestrator | Thursday 26 March 2026 05:05:43 +0000 (0:00:01.955) 0:03:06.823 ********
2026-03-26 05:05:58.092614 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092623 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092632 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092642 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092651 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092661 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092670 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092682 | orchestrator |
2026-03-26 05:05:58.092699 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-03-26 05:05:58.092715 | orchestrator | Thursday 26 March 2026 05:05:45 +0000 (0:00:02.173) 0:03:08.997 ********
2026-03-26 05:05:58.092731 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092747 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092762 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092778 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092795 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092810 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092826 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092854 | orchestrator |
2026-03-26 05:05:58.092870 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-03-26 05:05:58.092888 | orchestrator | Thursday 26 March 2026 05:05:47 +0000 (0:00:02.267) 0:03:10.944 ********
2026-03-26 05:05:58.092906 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.092923 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.092937 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.092947 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.092956 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.092966 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.092975 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.092984 | orchestrator |
2026-03-26 05:05:58.092994 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-03-26 05:05:58.093004 | orchestrator | Thursday 26 March 2026 05:05:49 +0000 (0:00:02.082) 0:03:13.212 ********
2026-03-26 05:05:58.093013 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.093022 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.093032 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.093041 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.093050 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.093060 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.093069 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.093078 | orchestrator |
2026-03-26 05:05:58.093088 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-03-26 05:05:58.093099 | orchestrator | Thursday 26 March 2026 05:05:51 +0000 (0:00:02.082) 0:03:15.295 ********
2026-03-26 05:05:58.093109 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.093120 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.093130 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.093141 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.093151 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.093162 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.093172 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.093183 | orchestrator |
2026-03-26 05:05:58.093194 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-03-26 05:05:58.093204 | orchestrator | Thursday 26 March 2026 05:05:53 +0000 (0:00:02.175) 0:03:17.470 ********
2026-03-26 05:05:58.093215 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.093225 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.093236 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.093246 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.093257 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.093267 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.093278 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.093289 | orchestrator |
2026-03-26 05:05:58.093299 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-03-26 05:05:58.093310 | orchestrator | Thursday 26 March 2026 05:05:56 +0000 (0:00:02.326) 0:03:19.796 ********
2026-03-26 05:05:58.093321 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:05:58.093332 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:05:58.093342 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:05:58.093353 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:05:58.093364 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:05:58.093416 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:05:58.093429 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:05:58.093440 | orchestrator |
2026-03-26 05:05:58.093462 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-03-26 05:06:20.089484 | orchestrator | Thursday 26 March 2026 05:05:58 +0000 (0:00:01.940) 0:03:21.737 ********
2026-03-26 05:06:20.089635 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.089656 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.089686 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.089725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 05:06:20.089739 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 05:06:20.089750 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.089761 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})
2026-03-26 05:06:20.089772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})
2026-03-26 05:06:20.089784 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.089803 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 05:06:20.089814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 05:06:20.089825 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.089836 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.089847 | orchestrator |
2026-03-26 05:06:20.089859 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-03-26 05:06:20.089870 | orchestrator | Thursday 26 March 2026 05:06:00 +0000 (0:00:02.288) 0:03:24.025 ********
2026-03-26 05:06:20.089880 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.089891 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.089901 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.089912 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.089922 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.089933 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.089944 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.089954 | orchestrator |
2026-03-26 05:06:20.089966 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-03-26 05:06:20.089979 | orchestrator | Thursday 26 March 2026 05:06:02 +0000 (0:00:01.980) 0:03:26.005 ********
2026-03-26 05:06:20.089992 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.090005 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.090092 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.090120 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.090139 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.090157 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.090176 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.090195 | orchestrator |
2026-03-26 05:06:20.090212 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-03-26 05:06:20.090230 | orchestrator | Thursday 26 March 2026 05:06:04 +0000 (0:00:02.178) 0:03:28.184 ********
2026-03-26 05:06:20.090251 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.090270 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.090289 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.090308 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.090329 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.090349 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.090368 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.090380 | orchestrator |
2026-03-26 05:06:20.090391 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-03-26 05:06:20.090402 | orchestrator | Thursday 26 March 2026 05:06:06 +0000 (0:00:01.996) 0:03:30.181 ********
2026-03-26 05:06:20.090412 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.090460 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.090480 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.090508 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.090545 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.090563 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.090581 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.090598 | orchestrator |
2026-03-26 05:06:20.090616 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-03-26 05:06:20.090635 | orchestrator | Thursday 26 March 2026 05:06:08 +0000 (0:00:02.413) 0:03:32.594 ********
2026-03-26 05:06:20.090654 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.090672 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.090690 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.090708 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.090726 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.090745 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.090765 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.090783 | orchestrator |
2026-03-26 05:06:20.090801 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-03-26 05:06:20.090819 | orchestrator | Thursday 26 March 2026 05:06:10 +0000 (0:00:02.030) 0:03:34.625 ********
2026-03-26 05:06:20.090838 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.090856 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.090875 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.090894 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.090911 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.090930 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.090948 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.090967 | orchestrator |
2026-03-26 05:06:20.090986 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-03-26 05:06:20.091005 | orchestrator | Thursday 26 March 2026 05:06:13 +0000 (0:00:02.077) 0:03:36.703 ********
2026-03-26 05:06:20.091048 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:06:20.091068 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:06:20.091088 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:06:20.091106 | orchestrator | skipping: [testbed-manager]
2026-03-26 05:06:20.091137 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 05:06:20.091157 | orchestrator |
2026-03-26 05:06:20.091175 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-03-26 05:06:20.091195 | orchestrator | Thursday 26 March 2026 05:06:15 +0000 (0:00:02.515) 0:03:39.218 ********
2026-03-26 05:06:20.091213 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:06:20.091232 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:06:20.091251 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:06:20.091270 | orchestrator |
2026-03-26 05:06:20.091289 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-03-26 05:06:20.091307 | orchestrator | Thursday 26 March 2026 05:06:17 +0000 (0:00:01.463) 0:03:40.681 ********
2026-03-26 05:06:20.091326 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 05:06:20.091345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 05:06:20.091363 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.091383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})
2026-03-26 05:06:20.091402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})
2026-03-26 05:06:20.091462 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.091482 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 05:06:20.091514 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 05:06:20.091533 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.091552 | orchestrator |
2026-03-26 05:06:20.091570 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-03-26 05:06:20.091589 | orchestrator | Thursday 26 March 2026 05:06:18 +0000 (0:00:01.382) 0:03:42.064 ********
2026-03-26 05:06:20.091612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:20.091634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:20.091652 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:20.091672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:20.091690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:20.091709 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:20.091725 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:20.091742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:20.091761 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:20.091781 | orchestrator |
2026-03-26 05:06:20.091811 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-03-26 05:06:30.181417 | orchestrator | Thursday 26 March 2026 05:06:20 +0000 (0:00:01.660) 0:03:43.725 ********
2026-03-26 05:06:30.181595 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:30.181655 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:30.181682 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:30.181699 | orchestrator |
2026-03-26 05:06:30.181715 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-03-26 05:06:30.181730 | orchestrator | Thursday 26 March 2026 05:06:21 +0000 (0:00:01.449) 0:03:45.175 ********
2026-03-26 05:06:30.181744 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:30.181759 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:30.181774 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:30.181789 | orchestrator |
2026-03-26 05:06:30.181804 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-03-26 05:06:30.181819 | orchestrator | Thursday 26 March 2026 05:06:22 +0000 (0:00:01.356) 0:03:46.531 ********
2026-03-26 05:06:30.181833 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:30.181876 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:30.181891 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:30.181906 | orchestrator |
2026-03-26 05:06:30.181920 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-03-26 05:06:30.181935 | orchestrator | Thursday 26 March 2026 05:06:24 +0000 (0:00:01.413) 0:03:47.944 ********
2026-03-26 05:06:30.181951 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:30.181967 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:30.181982 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:30.181998 | orchestrator |
2026-03-26 05:06:30.182073 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-03-26 05:06:30.182094 | orchestrator | Thursday 26 March 2026 05:06:25 +0000 (0:00:01.325) 0:03:49.270 ********
2026-03-26 05:06:30.182109 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 05:06:30.182126 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})
2026-03-26 05:06:30.182142 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 05:06:30.182158 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})
2026-03-26 05:06:30.182174 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 05:06:30.182190 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 05:06:30.182205 | orchestrator |
2026-03-26 05:06:30.182220 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-03-26 05:06:30.182237 | orchestrator | Thursday 26 March 2026 05:06:28 +0000 (0:00:02.964) 0:03:52.234 ********
2026-03-26 05:06:30.182260 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a/osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 955, 'dev': 6, 'nlink': 1, 'atime': 1774493776.0892901, 'mtime': 1774493776.08229, 'ctime': 1774493776.08229, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a/osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:30.182318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-e2623153-bc41-510f-8884-ef957bb96082/osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 965, 'dev': 6, 'nlink': 1, 'atime': 1774493795.0405843, 'mtime': 1774493795.0355842, 'ctime': 1774493795.0355842, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-e2623153-bc41-510f-8884-ef957bb96082/osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:30.182351 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:30.182367 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-a652979e-9f40-503a-bbc8-6de5e605991e/osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1774493778.800435, 'mtime': 1774493778.795435, 'ctime': 1774493778.795435, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-a652979e-9f40-503a-bbc8-6de5e605991e/osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:30.182383 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-b5eee7c3-8883-5bbe-be5a-75726e822543/osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1774493797.1287172, 'mtime': 1774493797.1227171, 'ctime': 1774493797.1227171, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-b5eee7c3-8883-5bbe-be5a-75726e822543/osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:30.182397 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:30.182427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66/osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 949, 'dev': 6, 'nlink': 1, 'atime': 1774493778.807298, 'mtime': 1774493778.8022978, 'ctime': 1774493778.8022978, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66/osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:36.184936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771/osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 959, 'dev': 6, 'nlink': 1, 'atime': 1774493796.95358, 'mtime': 1774493796.9495797, 'ctime': 1774493796.9495797, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771/osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:36.185036 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:36.185051 | orchestrator |
2026-03-26 05:06:36.185061 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-03-26 05:06:36.185072 | orchestrator | Thursday 26 March 2026 05:06:30 +0000 (0:00:01.595) 0:03:53.830 ********
2026-03-26 05:06:36.185081 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})
2026-03-26 05:06:36.185092 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})
2026-03-26 05:06:36.185101 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:36.185110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})
2026-03-26 05:06:36.185119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})
2026-03-26 05:06:36.185127 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:36.185136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})
2026-03-26 05:06:36.185145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})
2026-03-26 05:06:36.185153 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:06:36.185162 | orchestrator |
2026-03-26 05:06:36.185171 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-03-26 05:06:36.185181 | orchestrator | Thursday 26 March 2026 05:06:31 +0000 (0:00:01.451) 0:03:55.281 ********
2026-03-26 05:06:36.185212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:36.185224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:36.185247 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:06:36.185256 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:36.185279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}, 'ansible_loop_var': 'item'})
2026-03-26 05:06:36.185288 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:06:36.185297 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'}, 'ansible_loop_var': 'item'})
2026-03-26
05:06:36.185306 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}, 'ansible_loop_var': 'item'})  2026-03-26 05:06:36.185315 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:06:36.185324 | orchestrator | 2026-03-26 05:06:36.185332 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-03-26 05:06:36.185341 | orchestrator | Thursday 26 March 2026 05:06:33 +0000 (0:00:01.497) 0:03:56.779 ******** 2026-03-26 05:06:36.185350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'})  2026-03-26 05:06:36.185358 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'})  2026-03-26 05:06:36.185367 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:06:36.185375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'})  2026-03-26 05:06:36.185384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'})  2026-03-26 05:06:36.185392 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:06:36.185401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'})  2026-03-26 05:06:36.185409 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 
'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'})  2026-03-26 05:06:36.185418 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:06:36.185426 | orchestrator | 2026-03-26 05:06:36.185435 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-03-26 05:06:36.185483 | orchestrator | Thursday 26 March 2026 05:06:34 +0000 (0:00:01.663) 0:03:58.443 ******** 2026-03-26 05:06:36.185496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a', 'data_vg': 'ceph-93e8c9a2-b6ff-5fe0-a79e-2922336c3e0a'}, 'ansible_loop_var': 'item'})  2026-03-26 05:06:36.185506 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-e2623153-bc41-510f-8884-ef957bb96082', 'data_vg': 'ceph-e2623153-bc41-510f-8884-ef957bb96082'}, 'ansible_loop_var': 'item'})  2026-03-26 05:06:36.185516 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:06:36.185526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-a652979e-9f40-503a-bbc8-6de5e605991e', 'data_vg': 'ceph-a652979e-9f40-503a-bbc8-6de5e605991e'}, 'ansible_loop_var': 'item'})  2026-03-26 05:06:36.185542 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-b5eee7c3-8883-5bbe-be5a-75726e822543', 'data_vg': 'ceph-b5eee7c3-8883-5bbe-be5a-75726e822543'}, 'ansible_loop_var': 'item'})  2026-03-26 05:06:36.185552 | orchestrator | skipping: 
[testbed-node-4] 2026-03-26 05:06:36.185563 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-83c4def8-4703-5f7c-9549-7666ff9f2b66', 'data_vg': 'ceph-83c4def8-4703-5f7c-9549-7666ff9f2b66'}, 'ansible_loop_var': 'item'})  2026-03-26 05:06:36.185580 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-1fd8de68-da37-5e01-9bf2-5a04fcdcd771', 'data_vg': 'ceph-1fd8de68-da37-5e01-9bf2-5a04fcdcd771'}, 'ansible_loop_var': 'item'})  2026-03-26 05:06:45.897397 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:06:45.897535 | orchestrator | 2026-03-26 05:06:45.897553 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-03-26 05:06:45.897567 | orchestrator | Thursday 26 March 2026 05:06:36 +0000 (0:00:01.384) 0:03:59.828 ******** 2026-03-26 05:06:45.897578 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:06:45.897589 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:06:45.897600 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:06:45.897611 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:06:45.897621 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:06:45.897632 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:06:45.897642 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:06:45.897653 | orchestrator | 2026-03-26 05:06:45.897664 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-03-26 05:06:45.897675 | orchestrator | Thursday 26 March 2026 05:06:38 +0000 (0:00:01.971) 0:04:01.800 ******** 2026-03-26 05:06:45.897685 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:06:45.897696 | orchestrator | 
skipping: [testbed-node-1] 2026-03-26 05:06:45.897707 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:06:45.897717 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:06:45.897729 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 05:06:45.897740 | orchestrator | 2026-03-26 05:06:45.897751 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-03-26 05:06:45.897762 | orchestrator | Thursday 26 March 2026 05:06:40 +0000 (0:00:02.635) 0:04:04.435 ******** 2026-03-26 05:06:45.897795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897851 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:06:45.897861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897914 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:06:45.897925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.897986 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:06:45.897998 | orchestrator | 2026-03-26 05:06:45.898073 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-03-26 05:06:45.898088 | orchestrator | Thursday 26 March 2026 05:06:42 +0000 (0:00:01.442) 0:04:05.877 ******** 2026-03-26 05:06:45.898102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': 
{'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898183 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:06:45.898195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898266 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:06:45.898279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898333 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:06:45.898344 | orchestrator | 2026-03-26 05:06:45.898355 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-03-26 05:06:45.898366 | orchestrator | Thursday 26 March 2026 05:06:43 +0000 (0:00:01.713) 0:04:07.591 ******** 2026-03-26 05:06:45.898376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898430 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:06:45.898440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': 
{'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898514 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:06:45.898525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 05:06:45.898596 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:06:45.898606 | orchestrator | 2026-03-26 05:06:45.898617 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-03-26 05:06:45.898628 | orchestrator | Thursday 26 March 2026 05:06:45 +0000 (0:00:01.509) 0:04:09.100 ******** 2026-03-26 05:06:45.898639 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:06:45.898649 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
05:06:45.898666 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:02.249848 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:02.249990 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:02.250061 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:02.250125 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:02.250153 | orchestrator | 2026-03-26 05:07:02.250175 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-03-26 05:07:02.250195 | orchestrator | Thursday 26 March 2026 05:06:47 +0000 (0:00:01.861) 0:04:10.961 ******** 2026-03-26 05:07:02.250214 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:02.250229 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:02.250240 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:02.250251 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:02.250262 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:02.250273 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:02.250284 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:02.250295 | orchestrator | 2026-03-26 05:07:02.250306 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-03-26 05:07:02.250317 | orchestrator | Thursday 26 March 2026 05:06:49 +0000 (0:00:02.202) 0:04:13.164 ******** 2026-03-26 05:07:02.250328 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:02.250338 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:02.250349 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:02.250360 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:02.250371 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:02.250383 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:02.250396 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:02.250408 | orchestrator | 2026-03-26 05:07:02.250422 | orchestrator 
| TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-03-26 05:07:02.250443 | orchestrator | Thursday 26 March 2026 05:06:51 +0000 (0:00:02.090) 0:04:15.254 ******** 2026-03-26 05:07:02.250467 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:02.250519 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:02.250539 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:02.250556 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:02.250573 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:02.250592 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:02.250610 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:02.250629 | orchestrator | 2026-03-26 05:07:02.250647 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-03-26 05:07:02.250667 | orchestrator | Thursday 26 March 2026 05:06:53 +0000 (0:00:02.016) 0:04:17.271 ******** 2026-03-26 05:07:02.250685 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:02.250705 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:02.250724 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:02.250743 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:02.250762 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:02.250779 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:02.250797 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:02.250850 | orchestrator | 2026-03-26 05:07:02.250865 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-03-26 05:07:02.250877 | orchestrator | Thursday 26 March 2026 05:06:55 +0000 (0:00:02.121) 0:04:19.393 ******** 2026-03-26 05:07:02.250887 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:02.250898 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:02.250909 | orchestrator | skipping: 
[testbed-node-2] 2026-03-26 05:07:02.250920 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:02.250930 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:02.250941 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:02.250952 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:02.250962 | orchestrator | 2026-03-26 05:07:02.250973 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-03-26 05:07:02.250984 | orchestrator | Thursday 26 March 2026 05:06:57 +0000 (0:00:02.059) 0:04:21.452 ******** 2026-03-26 05:07:02.250995 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:02.251006 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:02.251016 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:02.251027 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:02.251037 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:02.251048 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:02.251058 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:02.251069 | orchestrator | 2026-03-26 05:07:02.251080 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-03-26 05:07:02.251091 | orchestrator | Thursday 26 March 2026 05:07:00 +0000 (0:00:02.271) 0:04:23.724 ******** 2026-03-26 05:07:02.251102 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:02.251158 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:02.251172 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 
'name': 'client.glance'})  2026-03-26 05:07:02.251184 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:02.251197 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:02.251210 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:02.251222 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:02.251254 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:02.251266 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:02.251277 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:02.251288 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:02.251299 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:02.251319 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 
'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:02.251330 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:02.251341 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:02.251352 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:02.251362 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:02.251373 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:02.251385 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:02.251396 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:02.251407 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:02.251521 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:02.251534 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:02.251544 | 
orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:02.251555 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:02.251566 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:02.251585 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:02.251596 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:02.251607 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:02.251617 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:02.251638 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.302642 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:05.302740 | orchestrator | skipping: [testbed-node-3] => 
(item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.302783 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:05.302794 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:05.302807 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:05.302818 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:05.302828 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:05.302837 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:05.302847 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:05.302857 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.302866 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'}) 
 2026-03-26 05:07:05.302876 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:05.302886 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:05.302895 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:05.302905 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:05.302915 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.302924 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:05.302934 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:05.302944 | orchestrator | 2026-03-26 05:07:05.302954 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-03-26 05:07:05.302965 | orchestrator | Thursday 26 March 2026 05:07:02 +0000 (0:00:02.175) 0:04:25.899 ******** 2026-03-26 05:07:05.302974 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:05.302983 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:05.302993 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:05.303002 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:05.303012 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:05.303022 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:05.303031 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:05.303040 | orchestrator | 2026-03-26 05:07:05.303063 | orchestrator | TASK [ceph-validate : Validate 
openstack_keys caps] **************************** 2026-03-26 05:07:05.303074 | orchestrator | Thursday 26 March 2026 05:07:04 +0000 (0:00:02.159) 0:04:28.058 ******** 2026-03-26 05:07:05.303087 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:05.303121 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:05.303142 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:05.303179 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:05.303196 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.303214 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:05.303231 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:05.303249 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:05.303267 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 
05:07:05.303282 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:05.303296 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:05.303306 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.303315 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:05.303325 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:05.303334 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:05.303343 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:05.303353 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:05.303362 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:05.303372 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd 
pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.303381 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:05.303391 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:05.303400 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:05.303418 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:05.303434 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:05.303444 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:05.303453 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:05.303463 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:05.303476 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:05.303528 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow 
r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:33.806665 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:33.806778 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:33.806793 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:33.806806 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:33.806824 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:33.806845 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:33.806864 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:33.806881 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:33.806898 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.806917 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-03-26 05:07:33.806934 | orchestrator | 
skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-03-26 05:07:33.806946 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-03-26 05:07:33.806955 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:33.806965 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:33.806997 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:33.807008 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.807018 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-03-26 05:07:33.807027 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-03-26 05:07:33.807037 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-03-26 05:07:33.807061 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.807071 | orchestrator | 2026-03-26 05:07:33.807082 | orchestrator | TASK [ceph-validate : Validate 
clients keys caps] ****************************** 2026-03-26 05:07:33.807093 | orchestrator | Thursday 26 March 2026 05:07:06 +0000 (0:00:02.193) 0:04:30.252 ******** 2026-03-26 05:07:33.807103 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.807116 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:33.807132 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:33.807148 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:33.807165 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.807181 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.807198 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.807215 | orchestrator | 2026-03-26 05:07:33.807233 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-03-26 05:07:33.807251 | orchestrator | Thursday 26 March 2026 05:07:08 +0000 (0:00:02.309) 0:04:32.562 ******** 2026-03-26 05:07:33.807263 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.807274 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:33.807285 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:33.807296 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:33.807306 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.807322 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.807339 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.807358 | orchestrator | 2026-03-26 05:07:33.807375 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-03-26 05:07:33.807407 | orchestrator | Thursday 26 March 2026 05:07:10 +0000 (0:00:02.062) 0:04:34.624 ******** 2026-03-26 05:07:33.807419 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.807430 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:33.807441 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:33.807451 | orchestrator | 
skipping: [testbed-node-3] 2026-03-26 05:07:33.807462 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.807473 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.807484 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.807497 | orchestrator | 2026-03-26 05:07:33.807515 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-26 05:07:33.807532 | orchestrator | Thursday 26 March 2026 05:07:13 +0000 (0:00:02.258) 0:04:36.883 ******** 2026-03-26 05:07:33.807581 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-03-26 05:07:33.807599 | orchestrator | 2026-03-26 05:07:33.807614 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-03-26 05:07:33.807624 | orchestrator | Thursday 26 March 2026 05:07:16 +0000 (0:00:02.867) 0:04:39.750 ******** 2026-03-26 05:07:33.807645 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-26 05:07:33.807655 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-26 05:07:33.807665 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-26 05:07:33.807674 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-26 05:07:33.807684 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-26 05:07:33.807693 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-26 05:07:33.807702 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-03-26 05:07:33.807712 | orchestrator | 
2026-03-26 05:07:33.807721 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-03-26 05:07:33.807730 | orchestrator | Thursday 26 March 2026 05:07:18 +0000 (0:00:02.053) 0:04:41.804 ******** 2026-03-26 05:07:33.807740 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.807749 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:33.807759 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:33.807768 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:33.807778 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.807787 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.807797 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.807806 | orchestrator | 2026-03-26 05:07:33.807816 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-03-26 05:07:33.807825 | orchestrator | Thursday 26 March 2026 05:07:20 +0000 (0:00:02.167) 0:04:43.971 ******** 2026-03-26 05:07:33.807835 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.807844 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:33.807853 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:33.807862 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:33.807872 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.807882 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.807891 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.807900 | orchestrator | 2026-03-26 05:07:33.807910 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-03-26 05:07:33.807919 | orchestrator | Thursday 26 March 2026 05:07:22 +0000 (0:00:01.993) 0:04:45.965 ******** 2026-03-26 05:07:33.807929 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:07:33.807939 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:07:33.807948 | orchestrator | ok: 
[testbed-node-2] 2026-03-26 05:07:33.807957 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:07:33.807967 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:07:33.807976 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:07:33.807986 | orchestrator | ok: [testbed-manager] 2026-03-26 05:07:33.807995 | orchestrator | 2026-03-26 05:07:33.808005 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-03-26 05:07:33.808014 | orchestrator | Thursday 26 March 2026 05:07:24 +0000 (0:00:02.604) 0:04:48.570 ******** 2026-03-26 05:07:33.808023 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.808033 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:33.808042 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:33.808052 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:33.808067 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.808077 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.808086 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.808096 | orchestrator | 2026-03-26 05:07:33.808105 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-26 05:07:33.808115 | orchestrator | Thursday 26 March 2026 05:07:27 +0000 (0:00:02.420) 0:04:50.990 ******** 2026-03-26 05:07:33.808124 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.808133 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:07:33.808149 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:07:33.808161 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:07:33.808183 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:07:33.808201 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:07:33.808218 | orchestrator | skipping: [testbed-manager] 2026-03-26 05:07:33.808235 | orchestrator | 2026-03-26 05:07:33.808251 | orchestrator | TASK [Get the ceph release being deployed] 
************************************* 2026-03-26 05:07:33.808263 | orchestrator | Thursday 26 March 2026 05:07:29 +0000 (0:00:02.355) 0:04:53.345 ******** 2026-03-26 05:07:33.808278 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:07:33.808294 | orchestrator | 2026-03-26 05:07:33.808311 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-03-26 05:07:33.808328 | orchestrator | Thursday 26 March 2026 05:07:32 +0000 (0:00:02.637) 0:04:55.982 ******** 2026-03-26 05:07:33.808345 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:07:33.808362 | orchestrator | 2026-03-26 05:07:33.808389 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-03-26 05:08:13.498501 | orchestrator | 2026-03-26 05:08:13.498709 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:08:13.498742 | orchestrator | Thursday 26 March 2026 05:07:33 +0000 (0:00:01.466) 0:04:57.450 ******** 2026-03-26 05:08:13.498764 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.498786 | orchestrator | 2026-03-26 05:08:13.498806 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:08:13.498824 | orchestrator | Thursday 26 March 2026 05:07:35 +0000 (0:00:01.455) 0:04:58.905 ******** 2026-03-26 05:08:13.498835 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.498846 | orchestrator | 2026-03-26 05:08:13.498857 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-03-26 05:08:13.498868 | orchestrator | Thursday 26 March 2026 05:07:36 +0000 (0:00:01.150) 0:05:00.056 ******** 2026-03-26 05:08:13.498883 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 
'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-26 05:08:13.498897 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-26 05:08:13.498908 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-26 05:08:13.498919 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-26 05:08:13.498933 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-26 05:08:13.498952 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': 
-1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}])  2026-03-26 05:08:13.499005 | orchestrator | 2026-03-26 05:08:13.499018 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-26 05:08:13.499029 | orchestrator | 2026-03-26 05:08:13.499043 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-26 05:08:13.499071 | orchestrator | Thursday 26 March 2026 05:07:46 +0000 (0:00:10.316) 0:05:10.373 ******** 2026-03-26 05:08:13.499084 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499097 | orchestrator | 2026-03-26 05:08:13.499109 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-26 05:08:13.499121 | orchestrator | Thursday 26 March 2026 05:07:48 +0000 (0:00:01.489) 0:05:11.862 ******** 2026-03-26 05:08:13.499134 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499147 | orchestrator | 2026-03-26 05:08:13.499158 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-26 05:08:13.499169 | orchestrator | Thursday 26 March 2026 05:07:49 +0000 (0:00:01.151) 0:05:13.014 ******** 2026-03-26 05:08:13.499180 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:13.499192 | orchestrator | 2026-03-26 05:08:13.499202 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-03-26 05:08:13.499213 | orchestrator | Thursday 26 March 2026 05:07:50 +0000 (0:00:01.210) 0:05:14.224 ******** 2026-03-26 05:08:13.499223 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499234 | orchestrator | 2026-03-26 05:08:13.499245 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-03-26 05:08:13.499255 | orchestrator | Thursday 26 March 2026 05:07:51 +0000 (0:00:01.156) 0:05:15.381 ******** 2026-03-26 05:08:13.499266 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-26 05:08:13.499276 | orchestrator | 2026-03-26 05:08:13.499287 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:08:13.499318 | orchestrator | Thursday 26 March 2026 05:07:52 +0000 (0:00:01.184) 0:05:16.566 ******** 2026-03-26 05:08:13.499330 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499341 | orchestrator | 2026-03-26 05:08:13.499352 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:08:13.499363 | orchestrator | Thursday 26 March 2026 05:07:54 +0000 (0:00:01.428) 0:05:17.994 ******** 2026-03-26 05:08:13.499373 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499384 | orchestrator | 2026-03-26 05:08:13.499395 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:08:13.499405 | orchestrator | Thursday 26 March 2026 05:07:55 +0000 (0:00:01.122) 0:05:19.117 ******** 2026-03-26 05:08:13.499416 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499426 | orchestrator | 2026-03-26 05:08:13.499437 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:08:13.499448 | orchestrator | Thursday 26 March 2026 05:07:56 +0000 (0:00:01.409) 0:05:20.527 ******** 2026-03-26 05:08:13.499458 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499469 | orchestrator | 2026-03-26 05:08:13.499479 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:08:13.499490 | orchestrator | Thursday 26 March 2026 05:07:58 +0000 (0:00:01.204) 0:05:21.732 ******** 2026-03-26 05:08:13.499501 | 
orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499511 | orchestrator | 2026-03-26 05:08:13.499522 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 05:08:13.499533 | orchestrator | Thursday 26 March 2026 05:07:59 +0000 (0:00:01.173) 0:05:22.905 ******** 2026-03-26 05:08:13.499543 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499554 | orchestrator | 2026-03-26 05:08:13.499564 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:08:13.499586 | orchestrator | Thursday 26 March 2026 05:08:00 +0000 (0:00:01.276) 0:05:24.181 ******** 2026-03-26 05:08:13.499597 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:13.499608 | orchestrator | 2026-03-26 05:08:13.499640 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 05:08:13.499651 | orchestrator | Thursday 26 March 2026 05:08:01 +0000 (0:00:01.114) 0:05:25.296 ******** 2026-03-26 05:08:13.499662 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499673 | orchestrator | 2026-03-26 05:08:13.499683 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 05:08:13.499694 | orchestrator | Thursday 26 March 2026 05:08:02 +0000 (0:00:01.193) 0:05:26.489 ******** 2026-03-26 05:08:13.499705 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:08:13.499716 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:08:13.499726 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:08:13.499737 | orchestrator | 2026-03-26 05:08:13.499748 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:08:13.499759 | orchestrator | Thursday 26 March 2026 05:08:04 +0000 (0:00:01.716) 0:05:28.206 
******** 2026-03-26 05:08:13.499770 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:13.499780 | orchestrator | 2026-03-26 05:08:13.499791 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 05:08:13.499802 | orchestrator | Thursday 26 March 2026 05:08:05 +0000 (0:00:01.228) 0:05:29.434 ******** 2026-03-26 05:08:13.499813 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:08:13.499824 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:08:13.499835 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:08:13.499845 | orchestrator | 2026-03-26 05:08:13.499856 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:08:13.499867 | orchestrator | Thursday 26 March 2026 05:08:09 +0000 (0:00:03.287) 0:05:32.722 ******** 2026-03-26 05:08:13.499878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 05:08:13.499888 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 05:08:13.499899 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 05:08:13.499910 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:13.499921 | orchestrator | 2026-03-26 05:08:13.499932 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:08:13.499942 | orchestrator | Thursday 26 March 2026 05:08:10 +0000 (0:00:01.380) 0:05:34.102 ******** 2026-03-26 05:08:13.499961 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:08:13.499974 | orchestrator | skipping: [testbed-node-0] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 05:08:13.499986 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 05:08:13.499997 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:13.500008 | orchestrator | 2026-03-26 05:08:13.500019 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 05:08:13.500030 | orchestrator | Thursday 26 March 2026 05:08:12 +0000 (0:00:01.865) 0:05:35.968 ******** 2026-03-26 05:08:13.500049 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:33.614711 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:33.614848 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:33.614861 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.614871 | orchestrator | 2026-03-26 05:08:33.614880 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 05:08:33.614890 | orchestrator | Thursday 26 March 2026 05:08:13 +0000 (0:00:01.175) 0:05:37.144 ******** 2026-03-26 05:08:33.614900 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c1b85917b265', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:08:06.328851', 'end': '2026-03-26 05:08:06.373241', 'delta': '0:00:00.044390', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c1b85917b265'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 05:08:33.614913 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '1fb5a820b9f6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:08:06.897478', 'end': '2026-03-26 05:08:06.954848', 'delta': '0:00:00.057370', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['1fb5a820b9f6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 05:08:33.614939 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2a382ea60872', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:08:07.774569', 'end': '2026-03-26 05:08:07.828855', 'delta': '0:00:00.054286', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a382ea60872'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 05:08:33.614948 | orchestrator | 2026-03-26 05:08:33.614957 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 05:08:33.614964 | orchestrator | Thursday 26 March 2026 05:08:14 +0000 (0:00:01.215) 0:05:38.360 ******** 2026-03-26 05:08:33.614996 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:33.615006 | orchestrator | 2026-03-26 05:08:33.615014 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 05:08:33.615021 | orchestrator | Thursday 26 March 2026 05:08:16 +0000 (0:00:01.615) 0:05:39.975 ******** 2026-03-26 05:08:33.615029 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615037 | orchestrator | 2026-03-26 05:08:33.615044 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 05:08:33.615052 | orchestrator | Thursday 26 March 2026 05:08:17 +0000 (0:00:01.231) 0:05:41.207 ******** 2026-03-26 05:08:33.615061 | orchestrator | ok: [testbed-node-0] 2026-03-26 
05:08:33.615068 | orchestrator | 2026-03-26 05:08:33.615076 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 05:08:33.615084 | orchestrator | Thursday 26 March 2026 05:08:18 +0000 (0:00:01.138) 0:05:42.346 ******** 2026-03-26 05:08:33.615109 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-26 05:08:33.615117 | orchestrator | 2026-03-26 05:08:33.615125 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:08:33.615133 | orchestrator | Thursday 26 March 2026 05:08:20 +0000 (0:00:02.054) 0:05:44.401 ******** 2026-03-26 05:08:33.615140 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:33.615148 | orchestrator | 2026-03-26 05:08:33.615156 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 05:08:33.615163 | orchestrator | Thursday 26 March 2026 05:08:21 +0000 (0:00:01.198) 0:05:45.599 ******** 2026-03-26 05:08:33.615173 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615188 | orchestrator | 2026-03-26 05:08:33.615203 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 05:08:33.615217 | orchestrator | Thursday 26 March 2026 05:08:23 +0000 (0:00:01.203) 0:05:46.803 ******** 2026-03-26 05:08:33.615231 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615246 | orchestrator | 2026-03-26 05:08:33.615260 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:08:33.615274 | orchestrator | Thursday 26 March 2026 05:08:24 +0000 (0:00:01.285) 0:05:48.088 ******** 2026-03-26 05:08:33.615288 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615302 | orchestrator | 2026-03-26 05:08:33.615318 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 05:08:33.615332 | 
orchestrator | Thursday 26 March 2026 05:08:25 +0000 (0:00:01.162) 0:05:49.250 ******** 2026-03-26 05:08:33.615347 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615363 | orchestrator | 2026-03-26 05:08:33.615377 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 05:08:33.615392 | orchestrator | Thursday 26 March 2026 05:08:26 +0000 (0:00:01.126) 0:05:50.377 ******** 2026-03-26 05:08:33.615403 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615412 | orchestrator | 2026-03-26 05:08:33.615421 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 05:08:33.615430 | orchestrator | Thursday 26 March 2026 05:08:27 +0000 (0:00:01.134) 0:05:51.512 ******** 2026-03-26 05:08:33.615439 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615448 | orchestrator | 2026-03-26 05:08:33.615457 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 05:08:33.615466 | orchestrator | Thursday 26 March 2026 05:08:28 +0000 (0:00:01.116) 0:05:52.628 ******** 2026-03-26 05:08:33.615475 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615484 | orchestrator | 2026-03-26 05:08:33.615492 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 05:08:33.615502 | orchestrator | Thursday 26 March 2026 05:08:30 +0000 (0:00:01.133) 0:05:53.762 ******** 2026-03-26 05:08:33.615511 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615520 | orchestrator | 2026-03-26 05:08:33.615530 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 05:08:33.615538 | orchestrator | Thursday 26 March 2026 05:08:31 +0000 (0:00:01.128) 0:05:54.891 ******** 2026-03-26 05:08:33.615555 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:33.615563 | orchestrator | 
2026-03-26 05:08:33.615571 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 05:08:33.615579 | orchestrator | Thursday 26 March 2026 05:08:32 +0000 (0:00:01.139) 0:05:56.031 ******** 2026-03-26 05:08:33.615587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:33.615602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:33.615610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:33.615621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:08:33.615665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:34.852131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:34.852273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:34.852322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 
'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 05:08:34.852368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:34.852381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:08:34.852394 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:34.852408 | orchestrator | 2026-03-26 05:08:34.852452 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:08:34.852464 | orchestrator | Thursday 26 March 2026 05:08:33 +0000 (0:00:01.223) 0:05:57.254 ******** 2026-03-26 05:08:34.852500 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:34.852516 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:34.852536 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:34.852555 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:34.852569 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:34.852580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:34.852602 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:58.959919 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:58.960079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:58.960092 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:08:58.960102 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960112 | orchestrator | 2026-03-26 05:08:58.960122 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:08:58.960131 | 
orchestrator | Thursday 26 March 2026 05:08:34 +0000 (0:00:01.245) 0:05:58.500 ******** 2026-03-26 05:08:58.960139 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:58.960148 | orchestrator | 2026-03-26 05:08:58.960156 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:08:58.960164 | orchestrator | Thursday 26 March 2026 05:08:36 +0000 (0:00:01.553) 0:06:00.054 ******** 2026-03-26 05:08:58.960172 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:58.960180 | orchestrator | 2026-03-26 05:08:58.960188 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:08:58.960211 | orchestrator | Thursday 26 March 2026 05:08:37 +0000 (0:00:01.144) 0:06:01.199 ******** 2026-03-26 05:08:58.960229 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:08:58.960237 | orchestrator | 2026-03-26 05:08:58.960245 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:08:58.960253 | orchestrator | Thursday 26 March 2026 05:08:39 +0000 (0:00:01.521) 0:06:02.720 ******** 2026-03-26 05:08:58.960261 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960269 | orchestrator | 2026-03-26 05:08:58.960277 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:08:58.960284 | orchestrator | Thursday 26 March 2026 05:08:40 +0000 (0:00:01.103) 0:06:03.824 ******** 2026-03-26 05:08:58.960292 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960300 | orchestrator | 2026-03-26 05:08:58.960308 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:08:58.960315 | orchestrator | Thursday 26 March 2026 05:08:41 +0000 (0:00:01.244) 0:06:05.068 ******** 2026-03-26 05:08:58.960323 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960331 | orchestrator | 2026-03-26 05:08:58.960339 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:08:58.960346 | orchestrator | Thursday 26 March 2026 05:08:42 +0000 (0:00:01.198) 0:06:06.267 ******** 2026-03-26 05:08:58.960354 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:08:58.960362 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-26 05:08:58.960370 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-26 05:08:58.960378 | orchestrator | 2026-03-26 05:08:58.960386 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:08:58.960394 | orchestrator | Thursday 26 March 2026 05:08:44 +0000 (0:00:01.994) 0:06:08.262 ******** 2026-03-26 05:08:58.960403 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 05:08:58.960411 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 05:08:58.960421 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 05:08:58.960430 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960439 | orchestrator | 2026-03-26 05:08:58.960448 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:08:58.960457 | orchestrator | Thursday 26 March 2026 05:08:45 +0000 (0:00:01.173) 0:06:09.435 ******** 2026-03-26 05:08:58.960467 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960476 | orchestrator | 2026-03-26 05:08:58.960484 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:08:58.960494 | orchestrator | Thursday 26 March 2026 05:08:46 +0000 (0:00:01.145) 0:06:10.581 ******** 2026-03-26 05:08:58.960503 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:08:58.960512 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 
05:08:58.960521 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:08:58.960536 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:08:58.960546 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:08:58.960555 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:08:58.960563 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:08:58.960572 | orchestrator | 2026-03-26 05:08:58.960581 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:08:58.960590 | orchestrator | Thursday 26 March 2026 05:08:49 +0000 (0:00:02.201) 0:06:12.782 ******** 2026-03-26 05:08:58.960598 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:08:58.960607 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:08:58.960616 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:08:58.960635 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:08:58.960643 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:08:58.960653 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:08:58.960662 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:08:58.960671 | orchestrator | 2026-03-26 05:08:58.960721 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-26 05:08:58.960736 | orchestrator | Thursday 26 March 2026 05:08:52 +0000 (0:00:02.906) 0:06:15.689 
******** 2026-03-26 05:08:58.960751 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-26 05:08:58.960764 | orchestrator | 2026-03-26 05:08:58.960778 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-26 05:08:58.960788 | orchestrator | Thursday 26 March 2026 05:08:54 +0000 (0:00:02.285) 0:06:17.974 ******** 2026-03-26 05:08:58.960798 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960807 | orchestrator | 2026-03-26 05:08:58.960817 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-26 05:08:58.960826 | orchestrator | Thursday 26 March 2026 05:08:55 +0000 (0:00:01.227) 0:06:19.202 ******** 2026-03-26 05:08:58.960835 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:08:58.960845 | orchestrator | 2026-03-26 05:08:58.960854 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-26 05:08:58.960864 | orchestrator | Thursday 26 March 2026 05:08:56 +0000 (0:00:01.140) 0:06:20.342 ******** 2026-03-26 05:08:58.960873 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-26 05:08:58.960883 | orchestrator | 2026-03-26 05:08:58.960892 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-26 05:08:58.960910 | orchestrator | Thursday 26 March 2026 05:08:58 +0000 (0:00:02.264) 0:06:22.607 ******** 2026-03-26 05:10:00.608854 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:10:00.608988 | orchestrator | 2026-03-26 05:10:00.609008 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-26 05:10:00.609025 | orchestrator | Thursday 26 March 2026 05:09:00 +0000 (0:00:01.162) 0:06:23.769 ******** 2026-03-26 05:10:00.609041 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:10:00.609057 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:10:00.609074 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:10:00.609088 | orchestrator |
2026-03-26 05:10:00.609101 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-26 05:10:00.609114 | orchestrator | Thursday 26 March 2026 05:09:02 +0000 (0:00:02.576) 0:06:26.346 ********
2026-03-26 05:10:00.609129 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-26 05:10:00.609144 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-26 05:10:00.609161 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-26 05:10:00.609176 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-26 05:10:00.609192 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-26 05:10:00.609210 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-26 05:10:00.609224 | orchestrator |
2026-03-26 05:10:00.609236 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-26 05:10:00.609249 | orchestrator | Thursday 26 March 2026 05:09:16 +0000 (0:00:13.506) 0:06:39.853 ********
2026-03-26 05:10:00.609262 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:10:00.609274 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:10:00.609309 | orchestrator |
2026-03-26 05:10:00.609320 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-26 05:10:00.609332 | orchestrator | Thursday 26 March 2026 05:09:19 +0000 (0:00:03.753) 0:06:43.607 ********
2026-03-26 05:10:00.609343 | orchestrator | changed: [testbed-node-0]
2026-03-26 05:10:00.609355 | orchestrator |
2026-03-26 05:10:00.609368 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 05:10:00.609382 | orchestrator | Thursday 26 March 2026 05:09:22 +0000 (0:00:02.565) 0:06:46.173 ********
2026-03-26 05:10:00.609394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-03-26 05:10:00.609408 | orchestrator |
2026-03-26 05:10:00.609436 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 05:10:00.609448 | orchestrator | Thursday 26 March 2026 05:09:24 +0000 (0:00:01.549) 0:06:47.723 ********
2026-03-26 05:10:00.609460 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-03-26 05:10:00.609473 | orchestrator |
2026-03-26 05:10:00.609486 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 05:10:00.609500 | orchestrator | Thursday 26 March 2026 05:09:25 +0000 (0:00:01.598) 0:06:49.322 ********
2026-03-26 05:10:00.609514 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.609527 | orchestrator |
2026-03-26 05:10:00.609539 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 05:10:00.609551 | orchestrator | Thursday 26 March 2026 05:09:27 +0000 (0:00:01.528) 0:06:50.850 ********
2026-03-26 05:10:00.609563 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.609575 | orchestrator |
2026-03-26 05:10:00.609586 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 05:10:00.609616 | orchestrator | Thursday 26 March 2026 05:09:28 +0000 (0:00:01.109) 0:06:51.960 ********
2026-03-26 05:10:00.609636 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.609648 | orchestrator |
2026-03-26 05:10:00.609660 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 05:10:00.609672 | orchestrator | Thursday 26 March 2026 05:09:29 +0000 (0:00:01.111) 0:06:53.071 ********
2026-03-26 05:10:00.609684 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.609696 | orchestrator |
2026-03-26 05:10:00.609708 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 05:10:00.609721 | orchestrator | Thursday 26 March 2026 05:09:30 +0000 (0:00:01.150) 0:06:54.222 ********
2026-03-26 05:10:00.609731 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.609743 | orchestrator |
2026-03-26 05:10:00.609754 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 05:10:00.609766 | orchestrator | Thursday 26 March 2026 05:09:32 +0000 (0:00:01.605) 0:06:55.828 ********
2026-03-26 05:10:00.609845 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.609857 | orchestrator |
2026-03-26 05:10:00.609870 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 05:10:00.609883 | orchestrator | Thursday 26 March 2026 05:09:33 +0000 (0:00:01.146) 0:06:56.975 ********
2026-03-26 05:10:00.609895 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.609908 | orchestrator |
2026-03-26 05:10:00.609921 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 05:10:00.609934 | orchestrator | Thursday 26 March 2026 05:09:34 +0000 (0:00:01.103) 0:06:58.078 ********
2026-03-26 05:10:00.609945 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.609956 | orchestrator |
2026-03-26 05:10:00.609968 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 05:10:00.609980 | orchestrator | Thursday 26 March 2026 05:09:36 +0000 (0:00:01.650) 0:06:59.729 ********
2026-03-26 05:10:00.609991 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.610004 | orchestrator |
2026-03-26 05:10:00.610100 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 05:10:00.610129 | orchestrator | Thursday 26 March 2026 05:09:37 +0000 (0:00:01.553) 0:07:01.282 ********
2026-03-26 05:10:00.610142 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610155 | orchestrator |
2026-03-26 05:10:00.610167 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:10:00.610178 | orchestrator | Thursday 26 March 2026 05:09:38 +0000 (0:00:01.188) 0:07:02.471 ********
2026-03-26 05:10:00.610188 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.610200 | orchestrator |
2026-03-26 05:10:00.610210 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:10:00.610222 | orchestrator | Thursday 26 March 2026 05:09:39 +0000 (0:00:01.181) 0:07:03.653 ********
2026-03-26 05:10:00.610234 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610246 | orchestrator |
2026-03-26 05:10:00.610259 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:10:00.610271 | orchestrator | Thursday 26 March 2026 05:09:41 +0000 (0:00:01.153) 0:07:04.807 ********
2026-03-26 05:10:00.610283 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610295 | orchestrator |
2026-03-26 05:10:00.610308 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:10:00.610318 | orchestrator | Thursday 26 March 2026 05:09:42 +0000 (0:00:01.111) 0:07:05.918 ********
2026-03-26 05:10:00.610329 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610338 | orchestrator |
2026-03-26 05:10:00.610349 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:10:00.610359 | orchestrator | Thursday 26 March 2026 05:09:43 +0000 (0:00:01.152) 0:07:07.070 ********
2026-03-26 05:10:00.610369 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610380 | orchestrator |
2026-03-26 05:10:00.610390 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 05:10:00.610401 | orchestrator | Thursday 26 March 2026 05:09:44 +0000 (0:00:01.133) 0:07:08.204 ********
2026-03-26 05:10:00.610412 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610423 | orchestrator |
2026-03-26 05:10:00.610433 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 05:10:00.610444 | orchestrator | Thursday 26 March 2026 05:09:45 +0000 (0:00:01.162) 0:07:09.366 ********
2026-03-26 05:10:00.610454 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.610465 | orchestrator |
2026-03-26 05:10:00.610477 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 05:10:00.610488 | orchestrator | Thursday 26 March 2026 05:09:46 +0000 (0:00:01.121) 0:07:10.488 ********
2026-03-26 05:10:00.610499 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.610510 | orchestrator |
2026-03-26 05:10:00.610521 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 05:10:00.610531 | orchestrator | Thursday 26 March 2026 05:09:47 +0000 (0:00:01.163) 0:07:11.652 ********
2026-03-26 05:10:00.610543 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:00.610555 | orchestrator |
2026-03-26 05:10:00.610566 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 05:10:00.610585 | orchestrator | Thursday 26 March 2026 05:09:49 +0000 (0:00:01.175) 0:07:12.828 ********
2026-03-26 05:10:00.610597 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610608 | orchestrator |
2026-03-26 05:10:00.610620 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 05:10:00.610631 | orchestrator | Thursday 26 March 2026 05:09:50 +0000 (0:00:01.104) 0:07:13.933 ********
2026-03-26 05:10:00.610643 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610655 | orchestrator |
2026-03-26 05:10:00.610667 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 05:10:00.610679 | orchestrator | Thursday 26 March 2026 05:09:51 +0000 (0:00:01.120) 0:07:15.053 ********
2026-03-26 05:10:00.610691 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610702 | orchestrator |
2026-03-26 05:10:00.610714 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 05:10:00.610736 | orchestrator | Thursday 26 March 2026 05:09:52 +0000 (0:00:01.141) 0:07:16.195 ********
2026-03-26 05:10:00.610748 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610760 | orchestrator |
2026-03-26 05:10:00.610793 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 05:10:00.610805 | orchestrator | Thursday 26 March 2026 05:09:53 +0000 (0:00:01.173) 0:07:17.369 ********
2026-03-26 05:10:00.610817 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610829 | orchestrator |
2026-03-26 05:10:00.610841 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 05:10:00.610853 | orchestrator | Thursday 26 March 2026 05:09:54 +0000 (0:00:01.128) 0:07:18.497 ********
2026-03-26 05:10:00.610864 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610876 | orchestrator |
2026-03-26 05:10:00.610887 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 05:10:00.610897 | orchestrator | Thursday 26 March 2026 05:09:56 +0000 (0:00:01.174) 0:07:19.671 ********
2026-03-26 05:10:00.610907 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610918 | orchestrator |
2026-03-26 05:10:00.610928 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 05:10:00.610940 | orchestrator | Thursday 26 March 2026 05:09:57 +0000 (0:00:01.108) 0:07:20.779 ********
2026-03-26 05:10:00.610950 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.610961 | orchestrator |
2026-03-26 05:10:00.610972 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 05:10:00.610984 | orchestrator | Thursday 26 March 2026 05:09:58 +0000 (0:00:01.130) 0:07:21.909 ********
2026-03-26 05:10:00.610995 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.611006 | orchestrator |
2026-03-26 05:10:00.611016 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 05:10:00.611028 | orchestrator | Thursday 26 March 2026 05:09:59 +0000 (0:00:01.129) 0:07:23.038 ********
2026-03-26 05:10:00.611039 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:00.611051 | orchestrator |
2026-03-26 05:10:00.611063 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 05:10:00.611074 | orchestrator | Thursday 26 March 2026 05:10:00 +0000 (0:00:01.211) 0:07:24.250 ********
2026-03-26 05:10:52.425036 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.425187 | orchestrator |
2026-03-26 05:10:52.425215 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 05:10:52.425233 | orchestrator | Thursday 26 March 2026 05:10:01 +0000 (0:00:01.136) 0:07:25.386 ********
2026-03-26 05:10:52.425250 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.425266 | orchestrator |
2026-03-26 05:10:52.425285 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 05:10:52.425301 | orchestrator | Thursday 26 March 2026 05:10:02 +0000 (0:00:01.153) 0:07:26.540 ********
2026-03-26 05:10:52.425317 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:52.425329 | orchestrator |
2026-03-26 05:10:52.425339 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 05:10:52.425349 | orchestrator | Thursday 26 March 2026 05:10:04 +0000 (0:00:02.009) 0:07:28.549 ********
2026-03-26 05:10:52.425358 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:52.425368 | orchestrator |
2026-03-26 05:10:52.425377 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 05:10:52.425387 | orchestrator | Thursday 26 March 2026 05:10:07 +0000 (0:00:02.412) 0:07:30.962 ********
2026-03-26 05:10:52.425397 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-26 05:10:52.425407 | orchestrator |
2026-03-26 05:10:52.425417 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 05:10:52.425426 | orchestrator | Thursday 26 March 2026 05:10:08 +0000 (0:00:01.491) 0:07:32.454 ********
2026-03-26 05:10:52.425438 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.425454 | orchestrator |
2026-03-26 05:10:52.425505 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 05:10:52.425527 | orchestrator | Thursday 26 March 2026 05:10:09 +0000 (0:00:01.135) 0:07:33.590 ********
2026-03-26 05:10:52.425543 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.425558 | orchestrator |
2026-03-26 05:10:52.425574 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 05:10:52.425590 | orchestrator | Thursday 26 March 2026 05:10:11 +0000 (0:00:01.150) 0:07:34.740 ********
2026-03-26 05:10:52.425606 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 05:10:52.425624 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 05:10:52.425643 | orchestrator |
2026-03-26 05:10:52.425661 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 05:10:52.425677 | orchestrator | Thursday 26 March 2026 05:10:13 +0000 (0:00:02.012) 0:07:36.752 ********
2026-03-26 05:10:52.425695 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:52.425712 | orchestrator |
2026-03-26 05:10:52.425727 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-26 05:10:52.425739 | orchestrator | Thursday 26 March 2026 05:10:14 +0000 (0:00:01.644) 0:07:38.396 ********
2026-03-26 05:10:52.425750 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.425761 | orchestrator |
2026-03-26 05:10:52.425788 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-26 05:10:52.425800 | orchestrator | Thursday 26 March 2026 05:10:15 +0000 (0:00:01.178) 0:07:39.575 ********
2026-03-26 05:10:52.425811 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.425821 | orchestrator |
2026-03-26 05:10:52.425832 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 05:10:52.425874 | orchestrator | Thursday 26 March 2026 05:10:17 +0000 (0:00:01.122) 0:07:40.698 ********
2026-03-26 05:10:52.425884 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.425894 | orchestrator |
2026-03-26 05:10:52.425903 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 05:10:52.425913 | orchestrator | Thursday 26 March 2026 05:10:18 +0000 (0:00:01.105) 0:07:41.803 ********
2026-03-26 05:10:52.425922 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-26 05:10:52.425932 | orchestrator |
2026-03-26 05:10:52.425941 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-26 05:10:52.425950 | orchestrator | Thursday 26 March 2026 05:10:19 +0000 (0:00:01.442) 0:07:43.245 ********
2026-03-26 05:10:52.425960 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:52.425969 | orchestrator |
2026-03-26 05:10:52.425978 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-26 05:10:52.425988 | orchestrator | Thursday 26 March 2026 05:10:21 +0000 (0:00:01.919) 0:07:45.165 ********
2026-03-26 05:10:52.425997 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-26 05:10:52.426006 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-26 05:10:52.426078 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-26 05:10:52.426091 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426100 | orchestrator |
2026-03-26 05:10:52.426110 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-26 05:10:52.426119 | orchestrator | Thursday 26 March 2026 05:10:22 +0000 (0:00:01.207) 0:07:46.373 ********
2026-03-26 05:10:52.426128 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426138 | orchestrator |
2026-03-26 05:10:52.426147 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-26 05:10:52.426156 | orchestrator | Thursday 26 March 2026 05:10:23 +0000 (0:00:01.133) 0:07:47.506 ********
2026-03-26 05:10:52.426166 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426175 | orchestrator |
2026-03-26 05:10:52.426184 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-26 05:10:52.426218 | orchestrator | Thursday 26 March 2026 05:10:25 +0000 (0:00:01.168) 0:07:48.674 ********
2026-03-26 05:10:52.426238 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426248 | orchestrator |
2026-03-26 05:10:52.426257 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-26 05:10:52.426286 | orchestrator | Thursday 26 March 2026 05:10:26 +0000 (0:00:01.181) 0:07:49.856 ********
2026-03-26 05:10:52.426297 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426306 | orchestrator |
2026-03-26 05:10:52.426316 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-26 05:10:52.426325 | orchestrator | Thursday 26 March 2026 05:10:27 +0000 (0:00:01.145) 0:07:51.001 ********
2026-03-26 05:10:52.426335 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426344 | orchestrator |
2026-03-26 05:10:52.426354 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 05:10:52.426364 | orchestrator | Thursday 26 March 2026 05:10:28 +0000 (0:00:01.146) 0:07:52.148 ********
2026-03-26 05:10:52.426373 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:52.426383 | orchestrator |
2026-03-26 05:10:52.426393 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 05:10:52.426402 | orchestrator | Thursday 26 March 2026 05:10:31 +0000 (0:00:02.586) 0:07:54.734 ********
2026-03-26 05:10:52.426412 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:52.426421 | orchestrator |
2026-03-26 05:10:52.426431 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 05:10:52.426440 | orchestrator | Thursday 26 March 2026 05:10:32 +0000 (0:00:01.153) 0:07:55.888 ********
2026-03-26 05:10:52.426449 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-26 05:10:52.426459 | orchestrator |
2026-03-26 05:10:52.426468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 05:10:52.426478 | orchestrator | Thursday 26 March 2026 05:10:33 +0000 (0:00:01.486) 0:07:57.375 ********
2026-03-26 05:10:52.426487 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426497 | orchestrator |
2026-03-26 05:10:52.426506 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 05:10:52.426515 | orchestrator | Thursday 26 March 2026 05:10:34 +0000 (0:00:01.192) 0:07:58.568 ********
2026-03-26 05:10:52.426525 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426534 | orchestrator |
2026-03-26 05:10:52.426544 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 05:10:52.426553 | orchestrator | Thursday 26 March 2026 05:10:36 +0000 (0:00:01.141) 0:07:59.710 ********
2026-03-26 05:10:52.426563 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426572 | orchestrator |
2026-03-26 05:10:52.426582 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 05:10:52.426591 | orchestrator | Thursday 26 March 2026 05:10:37 +0000 (0:00:01.119) 0:08:00.829 ********
2026-03-26 05:10:52.426600 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426610 | orchestrator |
2026-03-26 05:10:52.426619 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 05:10:52.426629 | orchestrator | Thursday 26 March 2026 05:10:38 +0000 (0:00:01.121) 0:08:01.951 ********
2026-03-26 05:10:52.426638 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426647 | orchestrator |
2026-03-26 05:10:52.426657 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 05:10:52.426673 | orchestrator | Thursday 26 March 2026 05:10:39 +0000 (0:00:01.190) 0:08:03.142 ********
2026-03-26 05:10:52.426683 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426692 | orchestrator |
2026-03-26 05:10:52.426702 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 05:10:52.426711 | orchestrator | Thursday 26 March 2026 05:10:40 +0000 (0:00:01.162) 0:08:04.304 ********
2026-03-26 05:10:52.426721 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426730 | orchestrator |
2026-03-26 05:10:52.426747 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 05:10:52.426757 | orchestrator | Thursday 26 March 2026 05:10:41 +0000 (0:00:01.117) 0:08:05.422 ********
2026-03-26 05:10:52.426767 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:10:52.426776 | orchestrator |
2026-03-26 05:10:52.426785 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 05:10:52.426795 | orchestrator | Thursday 26 March 2026 05:10:42 +0000 (0:00:01.169) 0:08:06.591 ********
2026-03-26 05:10:52.426804 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:10:52.426814 | orchestrator |
2026-03-26 05:10:52.426823 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 05:10:52.426832 | orchestrator | Thursday 26 March 2026 05:10:44 +0000 (0:00:01.150) 0:08:07.741 ********
2026-03-26 05:10:52.426867 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-26 05:10:52.426877 | orchestrator |
2026-03-26 05:10:52.426887 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 05:10:52.426896 | orchestrator | Thursday 26 March 2026 05:10:45 +0000 (0:00:01.555) 0:08:09.297 ********
2026-03-26 05:10:52.426906 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-26 05:10:52.426915 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-26 05:10:52.426925 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-26 05:10:52.426934 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-26 05:10:52.426944 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-26 05:10:52.426953 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-26 05:10:52.426962 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-26 05:10:52.426972 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-26 05:10:52.426981 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 05:10:52.426995 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 05:10:52.427011 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 05:10:52.427026 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 05:10:52.427040 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 05:10:52.427056 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 05:10:52.427080 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-26 05:11:40.616307 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-26 05:11:40.616404 | orchestrator |
2026-03-26 05:11:40.616416 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 05:11:40.616426 | orchestrator | Thursday 26 March 2026 05:10:52 +0000 (0:00:06.764) 0:08:16.062 ********
2026-03-26 05:11:40.616435 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616444 | orchestrator |
2026-03-26 05:11:40.616452 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 05:11:40.616461 | orchestrator | Thursday 26 March 2026 05:10:53 +0000 (0:00:01.113) 0:08:17.175 ********
2026-03-26 05:11:40.616469 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616476 | orchestrator |
2026-03-26 05:11:40.616484 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 05:11:40.616492 | orchestrator | Thursday 26 March 2026 05:10:54 +0000 (0:00:01.125) 0:08:18.301 ********
2026-03-26 05:11:40.616500 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616508 | orchestrator |
2026-03-26 05:11:40.616515 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 05:11:40.616523 | orchestrator | Thursday 26 March 2026 05:10:55 +0000 (0:00:01.132) 0:08:19.434 ********
2026-03-26 05:11:40.616531 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616539 | orchestrator |
2026-03-26 05:11:40.616547 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 05:11:40.616575 | orchestrator | Thursday 26 March 2026 05:10:56 +0000 (0:00:01.146) 0:08:20.581 ********
2026-03-26 05:11:40.616583 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616591 | orchestrator |
2026-03-26 05:11:40.616599 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 05:11:40.616607 | orchestrator | Thursday 26 March 2026 05:10:58 +0000 (0:00:01.110) 0:08:21.691 ********
2026-03-26 05:11:40.616615 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616623 | orchestrator |
2026-03-26 05:11:40.616631 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 05:11:40.616640 | orchestrator | Thursday 26 March 2026 05:10:59 +0000 (0:00:01.194) 0:08:22.885 ********
2026-03-26 05:11:40.616647 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616655 | orchestrator |
2026-03-26 05:11:40.616663 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 05:11:40.616671 | orchestrator | Thursday 26 March 2026 05:11:00 +0000 (0:00:01.126) 0:08:24.012 ********
2026-03-26 05:11:40.616679 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616686 | orchestrator |
2026-03-26 05:11:40.616694 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 05:11:40.616702 | orchestrator | Thursday 26 March 2026 05:11:01 +0000 (0:00:01.116) 0:08:25.128 ********
2026-03-26 05:11:40.616710 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616718 | orchestrator |
2026-03-26 05:11:40.616738 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 05:11:40.616746 | orchestrator | Thursday 26 March 2026 05:11:02 +0000 (0:00:01.154) 0:08:26.283 ********
2026-03-26 05:11:40.616754 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616761 | orchestrator |
2026-03-26 05:11:40.616769 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 05:11:40.616777 | orchestrator | Thursday 26 March 2026 05:11:03 +0000 (0:00:01.115) 0:08:27.399 ********
2026-03-26 05:11:40.616785 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616793 | orchestrator |
2026-03-26 05:11:40.616800 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 05:11:40.616808 | orchestrator | Thursday 26 March 2026 05:11:04 +0000 (0:00:01.158) 0:08:28.558 ********
2026-03-26 05:11:40.616816 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616824 | orchestrator |
2026-03-26 05:11:40.616832 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 05:11:40.616840 | orchestrator | Thursday 26 March 2026 05:11:06 +0000 (0:00:01.164) 0:08:29.722 ********
2026-03-26 05:11:40.616847 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616855 | orchestrator |
2026-03-26 05:11:40.616863 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 05:11:40.616872 | orchestrator | Thursday 26 March 2026 05:11:07 +0000 (0:00:01.257) 0:08:30.980 ********
2026-03-26 05:11:40.616881 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616889 | orchestrator |
2026-03-26 05:11:40.616918 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 05:11:40.616927 | orchestrator | Thursday 26 March 2026 05:11:08 +0000 (0:00:01.165) 0:08:32.147 ********
2026-03-26 05:11:40.616936 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616945 | orchestrator |
2026-03-26 05:11:40.616953 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 05:11:40.616962 | orchestrator | Thursday 26 March 2026 05:11:09 +0000 (0:00:01.213) 0:08:33.360 ********
2026-03-26 05:11:40.616971 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.616980 | orchestrator |
2026-03-26 05:11:40.616989 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 05:11:40.616997 | orchestrator | Thursday 26 March 2026 05:11:10 +0000 (0:00:01.193) 0:08:34.554 ********
2026-03-26 05:11:40.617006 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617022 | orchestrator |
2026-03-26 05:11:40.617031 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:11:40.617041 | orchestrator | Thursday 26 March 2026 05:11:12 +0000 (0:00:01.190) 0:08:35.744 ********
2026-03-26 05:11:40.617051 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617060 | orchestrator |
2026-03-26 05:11:40.617069 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:11:40.617077 | orchestrator | Thursday 26 March 2026 05:11:13 +0000 (0:00:01.119) 0:08:36.864 ********
2026-03-26 05:11:40.617085 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617092 | orchestrator |
2026-03-26 05:11:40.617114 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:11:40.617122 | orchestrator | Thursday 26 March 2026 05:11:14 +0000 (0:00:01.194) 0:08:38.059 ********
2026-03-26 05:11:40.617130 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617138 | orchestrator |
2026-03-26 05:11:40.617146 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:11:40.617153 | orchestrator | Thursday 26 March 2026 05:11:15 +0000 (0:00:01.131) 0:08:39.190 ********
2026-03-26 05:11:40.617161 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617169 | orchestrator |
2026-03-26 05:11:40.617177 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:11:40.617185 | orchestrator | Thursday 26 March 2026 05:11:16 +0000 (0:00:01.137) 0:08:40.328 ********
2026-03-26 05:11:40.617193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:11:40.617201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:11:40.617208 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:11:40.617216 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617224 | orchestrator |
2026-03-26 05:11:40.617232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:11:40.617239 | orchestrator | Thursday 26 March 2026 05:11:18 +0000 (0:00:01.768) 0:08:42.096 ********
2026-03-26 05:11:40.617247 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:11:40.617255 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:11:40.617263 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:11:40.617270 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617278 | orchestrator |
2026-03-26 05:11:40.617286 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:11:40.617294 | orchestrator | Thursday 26 March 2026 05:11:19 +0000 (0:00:01.440) 0:08:43.537 ********
2026-03-26 05:11:40.617301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:11:40.617309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:11:40.617317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:11:40.617324 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617332 | orchestrator |
2026-03-26 05:11:40.617340 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:11:40.617348 | orchestrator | Thursday 26 March 2026 05:11:21 +0000 (0:00:01.150) 0:08:45.015 ********
2026-03-26 05:11:40.617355 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:11:40.617363 | orchestrator |
2026-03-26 05:11:40.617371 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:11:40.617379 | orchestrator | Thursday 26 March 2026 05:11:22 +0000 (0:00:01.150) 0:08:46.166 ********
2026-03-26 05:11:40.617386 | orchestrator |
skipping: [testbed-node-0] => (item=0)  2026-03-26 05:11:40.617394 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:11:40.617402 | orchestrator | 2026-03-26 05:11:40.617414 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 05:11:40.617422 | orchestrator | Thursday 26 March 2026 05:11:23 +0000 (0:00:01.405) 0:08:47.571 ******** 2026-03-26 05:11:40.617435 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:11:40.617442 | orchestrator | 2026-03-26 05:11:40.617450 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-26 05:11:40.617458 | orchestrator | Thursday 26 March 2026 05:11:25 +0000 (0:00:01.819) 0:08:49.391 ******** 2026-03-26 05:11:40.617466 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:11:40.617474 | orchestrator | 2026-03-26 05:11:40.617481 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-26 05:11:40.617489 | orchestrator | Thursday 26 March 2026 05:11:26 +0000 (0:00:01.148) 0:08:50.540 ******** 2026-03-26 05:11:40.617497 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-03-26 05:11:40.617506 | orchestrator | 2026-03-26 05:11:40.617514 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-26 05:11:40.617521 | orchestrator | Thursday 26 March 2026 05:11:28 +0000 (0:00:01.527) 0:08:52.068 ******** 2026-03-26 05:11:40.617529 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-03-26 05:11:40.617537 | orchestrator | 2026-03-26 05:11:40.617545 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-26 05:11:40.617552 | orchestrator | Thursday 26 March 2026 05:11:31 +0000 (0:00:03.507) 0:08:55.575 ******** 2026-03-26 05:11:40.617560 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:11:40.617568 | 
orchestrator | 2026-03-26 05:11:40.617576 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-26 05:11:40.617584 | orchestrator | Thursday 26 March 2026 05:11:33 +0000 (0:00:01.206) 0:08:56.781 ******** 2026-03-26 05:11:40.617591 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:11:40.617599 | orchestrator | 2026-03-26 05:11:40.617607 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-26 05:11:40.617615 | orchestrator | Thursday 26 March 2026 05:11:34 +0000 (0:00:01.146) 0:08:57.928 ******** 2026-03-26 05:11:40.617622 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:11:40.617630 | orchestrator | 2026-03-26 05:11:40.617638 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-26 05:11:40.617646 | orchestrator | Thursday 26 March 2026 05:11:35 +0000 (0:00:01.166) 0:08:59.095 ******** 2026-03-26 05:11:40.617653 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:11:40.617661 | orchestrator | 2026-03-26 05:11:40.617669 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-26 05:11:40.617677 | orchestrator | Thursday 26 March 2026 05:11:37 +0000 (0:00:02.085) 0:09:01.180 ******** 2026-03-26 05:11:40.617684 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:11:40.617692 | orchestrator | 2026-03-26 05:11:40.617700 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-26 05:11:40.617708 | orchestrator | Thursday 26 March 2026 05:11:39 +0000 (0:00:01.613) 0:09:02.794 ******** 2026-03-26 05:11:40.617715 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:11:40.617723 | orchestrator | 2026-03-26 05:11:40.617734 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-26 05:12:37.610443 | orchestrator | Thursday 26 March 2026 05:11:40 +0000 (0:00:01.465) 
0:09:04.260 ******** 2026-03-26 05:12:37.610552 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.610568 | orchestrator | 2026-03-26 05:12:37.610581 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-26 05:12:37.610592 | orchestrator | Thursday 26 March 2026 05:11:42 +0000 (0:00:01.542) 0:09:05.802 ******** 2026-03-26 05:12:37.610603 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.610614 | orchestrator | 2026-03-26 05:12:37.610626 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-26 05:12:37.610637 | orchestrator | Thursday 26 March 2026 05:11:43 +0000 (0:00:01.688) 0:09:07.491 ******** 2026-03-26 05:12:37.610648 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.610659 | orchestrator | 2026-03-26 05:12:37.610670 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-26 05:12:37.610681 | orchestrator | Thursday 26 March 2026 05:11:45 +0000 (0:00:01.696) 0:09:09.188 ******** 2026-03-26 05:12:37.610714 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-26 05:12:37.610727 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-26 05:12:37.610738 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-26 05:12:37.610749 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-26 05:12:37.610760 | orchestrator | 2026-03-26 05:12:37.610771 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-26 05:12:37.610781 | orchestrator | Thursday 26 March 2026 05:11:49 +0000 (0:00:03.888) 0:09:13.076 ******** 2026-03-26 05:12:37.610792 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:12:37.610803 | orchestrator | 2026-03-26 05:12:37.610813 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-26 
05:12:37.610824 | orchestrator | Thursday 26 March 2026 05:11:51 +0000 (0:00:02.132) 0:09:15.209 ******** 2026-03-26 05:12:37.610835 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.610845 | orchestrator | 2026-03-26 05:12:37.610856 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-26 05:12:37.610868 | orchestrator | Thursday 26 March 2026 05:11:52 +0000 (0:00:01.155) 0:09:16.365 ******** 2026-03-26 05:12:37.610879 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.610889 | orchestrator | 2026-03-26 05:12:37.610900 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-26 05:12:37.610911 | orchestrator | Thursday 26 March 2026 05:11:53 +0000 (0:00:01.179) 0:09:17.544 ******** 2026-03-26 05:12:37.610922 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.610932 | orchestrator | 2026-03-26 05:12:37.610943 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-26 05:12:37.610954 | orchestrator | Thursday 26 March 2026 05:11:55 +0000 (0:00:02.047) 0:09:19.591 ******** 2026-03-26 05:12:37.610996 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.611017 | orchestrator | 2026-03-26 05:12:37.611038 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-26 05:12:37.611070 | orchestrator | Thursday 26 March 2026 05:11:57 +0000 (0:00:01.416) 0:09:21.008 ******** 2026-03-26 05:12:37.611090 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:12:37.611103 | orchestrator | 2026-03-26 05:12:37.611116 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-26 05:12:37.611128 | orchestrator | Thursday 26 March 2026 05:11:58 +0000 (0:00:01.144) 0:09:22.152 ******** 2026-03-26 05:12:37.611140 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-03-26 
05:12:37.611153 | orchestrator | 2026-03-26 05:12:37.611165 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-26 05:12:37.611178 | orchestrator | Thursday 26 March 2026 05:11:59 +0000 (0:00:01.472) 0:09:23.624 ******** 2026-03-26 05:12:37.611189 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:12:37.611202 | orchestrator | 2026-03-26 05:12:37.611214 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-26 05:12:37.611226 | orchestrator | Thursday 26 March 2026 05:12:01 +0000 (0:00:01.125) 0:09:24.750 ******** 2026-03-26 05:12:37.611237 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:12:37.611249 | orchestrator | 2026-03-26 05:12:37.611262 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-26 05:12:37.611274 | orchestrator | Thursday 26 March 2026 05:12:02 +0000 (0:00:01.105) 0:09:25.855 ******** 2026-03-26 05:12:37.611286 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-03-26 05:12:37.611298 | orchestrator | 2026-03-26 05:12:37.611310 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-26 05:12:37.611322 | orchestrator | Thursday 26 March 2026 05:12:03 +0000 (0:00:01.470) 0:09:27.326 ******** 2026-03-26 05:12:37.611334 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.611347 | orchestrator | 2026-03-26 05:12:37.611359 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-26 05:12:37.611380 | orchestrator | Thursday 26 March 2026 05:12:05 +0000 (0:00:02.330) 0:09:29.657 ******** 2026-03-26 05:12:37.611393 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.611403 | orchestrator | 2026-03-26 05:12:37.611414 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-26 
05:12:37.611425 | orchestrator | Thursday 26 March 2026 05:12:08 +0000 (0:00:02.018) 0:09:31.675 ******** 2026-03-26 05:12:37.611435 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.611446 | orchestrator | 2026-03-26 05:12:37.611457 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-26 05:12:37.611467 | orchestrator | Thursday 26 March 2026 05:12:10 +0000 (0:00:02.419) 0:09:34.095 ******** 2026-03-26 05:12:37.611478 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:12:37.611489 | orchestrator | 2026-03-26 05:12:37.611499 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-26 05:12:37.611510 | orchestrator | Thursday 26 March 2026 05:12:13 +0000 (0:00:03.290) 0:09:37.386 ******** 2026-03-26 05:12:37.611521 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-03-26 05:12:37.611531 | orchestrator | 2026-03-26 05:12:37.611560 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-03-26 05:12:37.611572 | orchestrator | Thursday 26 March 2026 05:12:15 +0000 (0:00:01.623) 0:09:39.010 ******** 2026-03-26 05:12:37.611583 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.611594 | orchestrator | 2026-03-26 05:12:37.611604 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-26 05:12:37.611615 | orchestrator | Thursday 26 March 2026 05:12:17 +0000 (0:00:02.355) 0:09:41.365 ******** 2026-03-26 05:12:37.611626 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:12:37.611636 | orchestrator | 2026-03-26 05:12:37.611647 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-26 05:12:37.611658 | orchestrator | Thursday 26 March 2026 05:12:20 +0000 (0:00:02.967) 0:09:44.332 ******** 2026-03-26 05:12:37.611668 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:12:37.611679 | orchestrator | 2026-03-26 05:12:37.611690 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-26 05:12:37.611700 | orchestrator | Thursday 26 March 2026 05:12:21 +0000 (0:00:01.131) 0:09:45.464 ******** 2026-03-26 05:12:37.611714 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-26 05:12:37.611728 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-03-26 05:12:37.611739 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-26 05:12:37.611756 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-26 05:12:37.611769 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-26 05:12:37.611788 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}])  2026-03-26 05:12:37.611800 | orchestrator | 2026-03-26 05:12:37.611811 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-26 05:12:37.611822 | orchestrator | Thursday 26 March 2026 05:12:31 +0000 (0:00:09.703) 0:09:55.167 ******** 
2026-03-26 05:12:37.611833 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:12:37.611844 | orchestrator | 2026-03-26 05:12:37.611855 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:12:37.611866 | orchestrator | Thursday 26 March 2026 05:12:33 +0000 (0:00:02.422) 0:09:57.590 ******** 2026-03-26 05:12:37.611877 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:12:37.611888 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-26 05:12:37.611898 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-26 05:12:37.611909 | orchestrator | 2026-03-26 05:12:37.611920 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:12:37.611930 | orchestrator | Thursday 26 March 2026 05:12:36 +0000 (0:00:02.229) 0:09:59.820 ******** 2026-03-26 05:12:37.611941 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 05:12:37.611953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 05:12:37.612025 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 05:12:37.612041 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:12:37.612052 | orchestrator | 2026-03-26 05:12:37.612063 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-26 05:12:37.612081 | orchestrator | Thursday 26 March 2026 05:12:37 +0000 (0:00:01.430) 0:10:01.251 ******** 2026-03-26 05:13:14.487742 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.487864 | orchestrator | 2026-03-26 05:13:14.487880 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-26 05:13:14.487894 | orchestrator | Thursday 26 March 2026 05:12:38 +0000 (0:00:01.139) 0:10:02.391 ******** 2026-03-26 05:13:14.487906 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:13:14.487918 | orchestrator | 2026-03-26 05:13:14.487929 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-26 05:13:14.487940 | orchestrator | Thursday 26 March 2026 05:12:40 +0000 (0:00:02.224) 0:10:04.615 ******** 2026-03-26 05:13:14.487951 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.487962 | orchestrator | 2026-03-26 05:13:14.487973 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-26 05:13:14.487984 | orchestrator | Thursday 26 March 2026 05:12:42 +0000 (0:00:01.124) 0:10:05.740 ******** 2026-03-26 05:13:14.487994 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.488058 | orchestrator | 2026-03-26 05:13:14.488071 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-26 05:13:14.488082 | orchestrator | Thursday 26 March 2026 05:12:43 +0000 (0:00:01.100) 0:10:06.841 ******** 2026-03-26 05:13:14.488093 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.488104 | orchestrator | 2026-03-26 05:13:14.488114 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-26 05:13:14.488125 | orchestrator | Thursday 26 March 2026 05:12:44 +0000 (0:00:01.117) 0:10:07.958 ******** 2026-03-26 05:13:14.488136 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.488148 | orchestrator | 2026-03-26 05:13:14.488200 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-26 05:13:14.488222 | orchestrator | Thursday 26 March 2026 05:12:45 +0000 (0:00:01.131) 0:10:09.089 ******** 2026-03-26 05:13:14.488241 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.488261 | 
orchestrator | 2026-03-26 05:13:14.488281 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-26 05:13:14.488301 | orchestrator | Thursday 26 March 2026 05:12:46 +0000 (0:00:01.091) 0:10:10.181 ******** 2026-03-26 05:13:14.488319 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.488340 | orchestrator | 2026-03-26 05:13:14.488360 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-26 05:13:14.488382 | orchestrator | Thursday 26 March 2026 05:12:47 +0000 (0:00:01.121) 0:10:11.303 ******** 2026-03-26 05:13:14.488403 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:13:14.488423 | orchestrator | 2026-03-26 05:13:14.488443 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-26 05:13:14.488464 | orchestrator | 2026-03-26 05:13:14.488484 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-26 05:13:14.488505 | orchestrator | Thursday 26 March 2026 05:12:48 +0000 (0:00:00.939) 0:10:12.242 ******** 2026-03-26 05:13:14.488526 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.488548 | orchestrator | 2026-03-26 05:13:14.488567 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-26 05:13:14.488597 | orchestrator | Thursday 26 March 2026 05:12:49 +0000 (0:00:01.134) 0:10:13.377 ******** 2026-03-26 05:13:14.488610 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.488620 | orchestrator | 2026-03-26 05:13:14.488631 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-26 05:13:14.488641 | orchestrator | Thursday 26 March 2026 05:12:50 +0000 (0:00:00.796) 0:10:14.173 ******** 2026-03-26 05:13:14.488652 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:14.488663 | orchestrator | 2026-03-26 05:13:14.488673 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-26 05:13:14.488684 | orchestrator | Thursday 26 March 2026 05:12:51 +0000 (0:00:00.751) 0:10:14.925 ******** 2026-03-26 05:13:14.488695 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.488705 | orchestrator | 2026-03-26 05:13:14.488716 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:13:14.488726 | orchestrator | Thursday 26 March 2026 05:12:52 +0000 (0:00:00.774) 0:10:15.699 ******** 2026-03-26 05:13:14.488737 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-26 05:13:14.488747 | orchestrator | 2026-03-26 05:13:14.488758 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:13:14.488769 | orchestrator | Thursday 26 March 2026 05:12:53 +0000 (0:00:01.197) 0:10:16.897 ******** 2026-03-26 05:13:14.488779 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.488790 | orchestrator | 2026-03-26 05:13:14.488800 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:13:14.488812 | orchestrator | Thursday 26 March 2026 05:12:54 +0000 (0:00:01.482) 0:10:18.379 ******** 2026-03-26 05:13:14.488830 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.488848 | orchestrator | 2026-03-26 05:13:14.488866 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:13:14.488884 | orchestrator | Thursday 26 March 2026 05:12:55 +0000 (0:00:01.155) 0:10:19.535 ******** 2026-03-26 05:13:14.488901 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.488920 | orchestrator | 2026-03-26 05:13:14.488939 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:13:14.488958 | orchestrator | Thursday 26 March 2026 05:12:57 +0000 (0:00:01.444) 0:10:20.979 
******** 2026-03-26 05:13:14.488976 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.488992 | orchestrator | 2026-03-26 05:13:14.489029 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:13:14.489042 | orchestrator | Thursday 26 March 2026 05:12:58 +0000 (0:00:01.141) 0:10:22.120 ******** 2026-03-26 05:13:14.489065 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.489076 | orchestrator | 2026-03-26 05:13:14.489086 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 05:13:14.489097 | orchestrator | Thursday 26 March 2026 05:12:59 +0000 (0:00:01.146) 0:10:23.267 ******** 2026-03-26 05:13:14.489108 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.489119 | orchestrator | 2026-03-26 05:13:14.489129 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:13:14.489140 | orchestrator | Thursday 26 March 2026 05:13:00 +0000 (0:00:01.138) 0:10:24.406 ******** 2026-03-26 05:13:14.489171 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:14.489183 | orchestrator | 2026-03-26 05:13:14.489194 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 05:13:14.489205 | orchestrator | Thursday 26 March 2026 05:13:01 +0000 (0:00:01.137) 0:10:25.544 ******** 2026-03-26 05:13:14.489215 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.489226 | orchestrator | 2026-03-26 05:13:14.489237 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 05:13:14.489247 | orchestrator | Thursday 26 March 2026 05:13:03 +0000 (0:00:01.225) 0:10:26.769 ******** 2026-03-26 05:13:14.489258 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:13:14.489269 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 
05:13:14.489280 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:13:14.489290 | orchestrator | 2026-03-26 05:13:14.489301 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:13:14.489311 | orchestrator | Thursday 26 March 2026 05:13:05 +0000 (0:00:01.979) 0:10:28.749 ******** 2026-03-26 05:13:14.489322 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:14.489333 | orchestrator | 2026-03-26 05:13:14.489343 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 05:13:14.489354 | orchestrator | Thursday 26 March 2026 05:13:06 +0000 (0:00:01.291) 0:10:30.040 ******** 2026-03-26 05:13:14.489365 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:13:14.489375 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:13:14.489386 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:13:14.489397 | orchestrator | 2026-03-26 05:13:14.489408 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:13:14.489419 | orchestrator | Thursday 26 March 2026 05:13:09 +0000 (0:00:03.220) 0:10:33.261 ******** 2026-03-26 05:13:14.489430 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-26 05:13:14.489447 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-26 05:13:14.489463 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-26 05:13:14.489475 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:14.489485 | orchestrator | 2026-03-26 05:13:14.489496 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:13:14.489507 | orchestrator | Thursday 26 March 2026 05:13:11 +0000 (0:00:01.731) 
0:10:34.993 ******** 2026-03-26 05:13:14.489518 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:13:14.489539 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 05:13:14.489551 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 05:13:14.489570 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:14.489581 | orchestrator | 2026-03-26 05:13:14.489592 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 05:13:14.489603 | orchestrator | Thursday 26 March 2026 05:13:13 +0000 (0:00:01.960) 0:10:36.954 ******** 2026-03-26 05:13:14.489616 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:14.489634 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:14.489653 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:14.489673 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:14.489691 | orchestrator | 2026-03-26 05:13:14.489721 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 05:13:35.229414 | orchestrator | Thursday 26 March 2026 05:13:14 +0000 (0:00:01.173) 0:10:38.127 ******** 2026-03-26 05:13:35.229539 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:13:06.930490', 'end': '2026-03-26 05:13:06.985292', 'delta': '0:00:00.054802', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 05:13:35.229561 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '1fb5a820b9f6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:13:07.808352', 'end': '2026-03-26 
05:13:07.863473', 'delta': '0:00:00.055121', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['1fb5a820b9f6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 05:13:35.229590 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '2a382ea60872', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:13:08.378617', 'end': '2026-03-26 05:13:08.427807', 'delta': '0:00:00.049190', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a382ea60872'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 05:13:35.229625 | orchestrator | 2026-03-26 05:13:35.229639 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 05:13:35.229650 | orchestrator | Thursday 26 March 2026 05:13:15 +0000 (0:00:01.248) 0:10:39.375 ******** 2026-03-26 05:13:35.229662 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:35.229674 | orchestrator | 2026-03-26 05:13:35.229685 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 05:13:35.229697 | orchestrator | Thursday 26 March 2026 05:13:16 +0000 (0:00:01.265) 0:10:40.640 ******** 2026-03-26 05:13:35.229708 | orchestrator | skipping: 
[testbed-node-1] 2026-03-26 05:13:35.229720 | orchestrator | 2026-03-26 05:13:35.229732 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 05:13:35.229743 | orchestrator | Thursday 26 March 2026 05:13:18 +0000 (0:00:01.289) 0:10:41.930 ******** 2026-03-26 05:13:35.229754 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:35.229765 | orchestrator | 2026-03-26 05:13:35.229776 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 05:13:35.229787 | orchestrator | Thursday 26 March 2026 05:13:19 +0000 (0:00:01.171) 0:10:43.101 ******** 2026-03-26 05:13:35.229799 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:13:35.229810 | orchestrator | 2026-03-26 05:13:35.229821 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:13:35.229833 | orchestrator | Thursday 26 March 2026 05:13:22 +0000 (0:00:02.994) 0:10:46.095 ******** 2026-03-26 05:13:35.229844 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:13:35.229855 | orchestrator | 2026-03-26 05:13:35.229866 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 05:13:35.229877 | orchestrator | Thursday 26 March 2026 05:13:23 +0000 (0:00:01.136) 0:10:47.232 ******** 2026-03-26 05:13:35.229888 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.229899 | orchestrator | 2026-03-26 05:13:35.229911 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 05:13:35.229922 | orchestrator | Thursday 26 March 2026 05:13:24 +0000 (0:00:01.163) 0:10:48.395 ******** 2026-03-26 05:13:35.229933 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.229945 | orchestrator | 2026-03-26 05:13:35.229958 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 
05:13:35.229970 | orchestrator | Thursday 26 March 2026 05:13:25 +0000 (0:00:01.199) 0:10:49.595 ******** 2026-03-26 05:13:35.229983 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.229997 | orchestrator | 2026-03-26 05:13:35.230010 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 05:13:35.230275 | orchestrator | Thursday 26 March 2026 05:13:27 +0000 (0:00:01.130) 0:10:50.725 ******** 2026-03-26 05:13:35.230297 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.230308 | orchestrator | 2026-03-26 05:13:35.230320 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 05:13:35.231136 | orchestrator | Thursday 26 March 2026 05:13:28 +0000 (0:00:01.148) 0:10:51.874 ******** 2026-03-26 05:13:35.231155 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.231168 | orchestrator | 2026-03-26 05:13:35.231181 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 05:13:35.231193 | orchestrator | Thursday 26 March 2026 05:13:29 +0000 (0:00:01.191) 0:10:53.065 ******** 2026-03-26 05:13:35.231206 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.231218 | orchestrator | 2026-03-26 05:13:35.231230 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 05:13:35.231242 | orchestrator | Thursday 26 March 2026 05:13:30 +0000 (0:00:01.161) 0:10:54.227 ******** 2026-03-26 05:13:35.231268 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.231280 | orchestrator | 2026-03-26 05:13:35.231293 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 05:13:35.231305 | orchestrator | Thursday 26 March 2026 05:13:31 +0000 (0:00:01.180) 0:10:55.408 ******** 2026-03-26 05:13:35.231317 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.231329 | 
orchestrator | 2026-03-26 05:13:35.231342 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 05:13:35.231354 | orchestrator | Thursday 26 March 2026 05:13:32 +0000 (0:00:01.116) 0:10:56.525 ******** 2026-03-26 05:13:35.231365 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:35.231375 | orchestrator | 2026-03-26 05:13:35.231386 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 05:13:35.231396 | orchestrator | Thursday 26 March 2026 05:13:33 +0000 (0:00:01.105) 0:10:57.630 ******** 2026-03-26 05:13:35.231409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:13:35.231432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:13:35.231444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-03-26 05:13:35.231457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:13:35.231471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:13:35.231482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:13:35.231505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 
05:13:36.504915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2e41bcf9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:13:36.505070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:13:36.505093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:13:36.505106 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:13:36.505119 | orchestrator | 2026-03-26 05:13:36.505131 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:13:36.505143 | orchestrator | Thursday 26 March 2026 05:13:35 +0000 (0:00:01.240) 0:10:58.870 ******** 2026-03-26 05:13:36.505157 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:36.505212 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:36.505225 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:36.505238 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:36.505251 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:36.505262 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:36.505273 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:13:36.505349 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2e41bcf9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:14:07.353767 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:14:07.353915 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:14:07.353944 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
05:14:07.353965 | orchestrator | 2026-03-26 05:14:07.353984 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:14:07.354002 | orchestrator | Thursday 26 March 2026 05:13:36 +0000 (0:00:01.281) 0:11:00.152 ******** 2026-03-26 05:14:07.354133 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:14:07.354172 | orchestrator | 2026-03-26 05:14:07.354181 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:14:07.354189 | orchestrator | Thursday 26 March 2026 05:13:38 +0000 (0:00:01.512) 0:11:01.665 ******** 2026-03-26 05:14:07.354197 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:14:07.354205 | orchestrator | 2026-03-26 05:14:07.354213 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:14:07.354220 | orchestrator | Thursday 26 March 2026 05:13:39 +0000 (0:00:01.166) 0:11:02.831 ******** 2026-03-26 05:14:07.354228 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:14:07.354236 | orchestrator | 2026-03-26 05:14:07.354244 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:14:07.354251 | orchestrator | Thursday 26 March 2026 05:13:40 +0000 (0:00:01.499) 0:11:04.331 ******** 2026-03-26 05:14:07.354260 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354268 | orchestrator | 2026-03-26 05:14:07.354277 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:14:07.354286 | orchestrator | Thursday 26 March 2026 05:13:41 +0000 (0:00:01.115) 0:11:05.447 ******** 2026-03-26 05:14:07.354295 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354304 | orchestrator | 2026-03-26 05:14:07.354313 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:14:07.354322 | orchestrator | Thursday 26 March 2026 
05:13:43 +0000 (0:00:01.408) 0:11:06.855 ******** 2026-03-26 05:14:07.354331 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354340 | orchestrator | 2026-03-26 05:14:07.354349 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:14:07.354358 | orchestrator | Thursday 26 March 2026 05:13:44 +0000 (0:00:01.164) 0:11:08.019 ******** 2026-03-26 05:14:07.354367 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-26 05:14:07.354376 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:14:07.354385 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-26 05:14:07.354394 | orchestrator | 2026-03-26 05:14:07.354404 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:14:07.354413 | orchestrator | Thursday 26 March 2026 05:13:46 +0000 (0:00:02.048) 0:11:10.068 ******** 2026-03-26 05:14:07.354422 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-26 05:14:07.354431 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-26 05:14:07.354440 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-26 05:14:07.354449 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354459 | orchestrator | 2026-03-26 05:14:07.354468 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:14:07.354477 | orchestrator | Thursday 26 March 2026 05:13:47 +0000 (0:00:01.208) 0:11:11.277 ******** 2026-03-26 05:14:07.354487 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354496 | orchestrator | 2026-03-26 05:14:07.354505 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:14:07.354514 | orchestrator | Thursday 26 March 2026 05:13:48 +0000 (0:00:01.111) 0:11:12.389 ******** 2026-03-26 05:14:07.354523 | 
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:14:07.354533 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:14:07.354542 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:14:07.354551 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:14:07.354560 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:14:07.354581 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:14:07.354607 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:14:07.354615 | orchestrator | 2026-03-26 05:14:07.354630 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:14:07.354638 | orchestrator | Thursday 26 March 2026 05:13:50 +0000 (0:00:01.883) 0:11:14.272 ******** 2026-03-26 05:14:07.354645 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:14:07.354653 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:14:07.354661 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:14:07.354669 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:14:07.354677 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:14:07.354684 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:14:07.354692 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:14:07.354700 | orchestrator | 2026-03-26 05:14:07.354707 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-03-26 05:14:07.354716 | orchestrator | Thursday 26 March 2026 05:13:53 +0000 (0:00:02.391) 0:11:16.664 ******** 2026-03-26 05:14:07.354724 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354731 | orchestrator | 2026-03-26 05:14:07.354739 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-03-26 05:14:07.354747 | orchestrator | Thursday 26 March 2026 05:13:53 +0000 (0:00:00.872) 0:11:17.537 ******** 2026-03-26 05:14:07.354755 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354762 | orchestrator | 2026-03-26 05:14:07.354770 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-03-26 05:14:07.354778 | orchestrator | Thursday 26 March 2026 05:13:54 +0000 (0:00:00.876) 0:11:18.414 ******** 2026-03-26 05:14:07.354785 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354793 | orchestrator | 2026-03-26 05:14:07.354801 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-03-26 05:14:07.354808 | orchestrator | Thursday 26 March 2026 05:13:55 +0000 (0:00:00.765) 0:11:19.180 ******** 2026-03-26 05:14:07.354816 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354824 | orchestrator | 2026-03-26 05:14:07.354831 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-03-26 05:14:07.354839 | orchestrator | Thursday 26 March 2026 05:13:56 +0000 (0:00:00.894) 0:11:20.074 ******** 2026-03-26 05:14:07.354847 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:14:07.354855 | orchestrator | 2026-03-26 05:14:07.354862 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-03-26 05:14:07.354870 | orchestrator | Thursday 26 March 2026 05:13:57 +0000 (0:00:00.785) 0:11:20.859 ******** 
2026-03-26 05:14:07.354878 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-26 05:14:07.354886 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:14:07.354893 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-26 05:14:07.354901 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:07.354909 | orchestrator |
2026-03-26 05:14:07.354916 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-26 05:14:07.354924 | orchestrator | Thursday 26 March 2026 05:13:58 +0000 (0:00:01.087) 0:11:21.947 ********
2026-03-26 05:14:07.354932 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-26 05:14:07.354940 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-26 05:14:07.354948 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-26 05:14:07.354955 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-26 05:14:07.354963 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-26 05:14:07.354971 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-26 05:14:07.354984 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:07.354992 | orchestrator |
2026-03-26 05:14:07.355000 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-26 05:14:07.355007 | orchestrator | Thursday 26 March 2026 05:13:59 +0000 (0:00:01.605) 0:11:23.552 ********
2026-03-26 05:14:07.355015 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:14:07.355023 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:14:07.355030 | orchestrator |
2026-03-26 05:14:07.355038 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-26 05:14:07.355046 | orchestrator | Thursday 26 March 2026 05:14:03 +0000 (0:00:03.184) 0:11:26.737 ********
2026-03-26 05:14:07.355054 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:14:07.355061 | orchestrator |
2026-03-26 05:14:07.355091 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 05:14:07.355098 | orchestrator | Thursday 26 March 2026 05:14:05 +0000 (0:00:01.984) 0:11:28.721 ********
2026-03-26 05:14:07.355106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-03-26 05:14:07.355114 | orchestrator |
2026-03-26 05:14:07.355122 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 05:14:07.355130 | orchestrator | Thursday 26 March 2026 05:14:06 +0000 (0:00:01.137) 0:11:29.859 ********
2026-03-26 05:14:07.355138 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-03-26 05:14:07.355145 | orchestrator |
2026-03-26 05:14:07.355158 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 05:14:07.355171 | orchestrator | Thursday 26 March 2026 05:14:07 +0000 (0:00:01.518) 0:11:30.998 ********
2026-03-26 05:14:50.326699 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.326819 | orchestrator |
2026-03-26 05:14:50.326835 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 05:14:50.326849 | orchestrator | Thursday 26 March 2026 05:14:08 +0000 (0:00:01.518) 0:11:32.517 ********
2026-03-26 05:14:50.326869 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.326890 | orchestrator |
2026-03-26 05:14:50.326904 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 05:14:50.326915 | orchestrator | Thursday 26 March 2026 05:14:09 +0000 (0:00:01.128) 0:11:33.645 ********
2026-03-26 05:14:50.326925 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.326936 | orchestrator |
2026-03-26 05:14:50.326947 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 05:14:50.326958 | orchestrator | Thursday 26 March 2026 05:14:11 +0000 (0:00:01.116) 0:11:34.762 ********
2026-03-26 05:14:50.326968 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.326979 | orchestrator |
2026-03-26 05:14:50.326989 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 05:14:50.327000 | orchestrator | Thursday 26 March 2026 05:14:12 +0000 (0:00:01.238) 0:11:36.000 ********
2026-03-26 05:14:50.327011 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.327022 | orchestrator |
2026-03-26 05:14:50.327032 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 05:14:50.327043 | orchestrator | Thursday 26 March 2026 05:14:13 +0000 (0:00:01.551) 0:11:37.552 ********
2026-03-26 05:14:50.327053 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327064 | orchestrator |
2026-03-26 05:14:50.327075 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 05:14:50.327085 | orchestrator | Thursday 26 March 2026 05:14:15 +0000 (0:00:01.151) 0:11:38.704 ********
2026-03-26 05:14:50.327096 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327106 | orchestrator |
2026-03-26 05:14:50.327151 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 05:14:50.327163 | orchestrator | Thursday 26 March 2026 05:14:16 +0000 (0:00:01.162) 0:11:39.866 ********
2026-03-26 05:14:50.327201 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.327213 | orchestrator |
2026-03-26 05:14:50.327226 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 05:14:50.327239 | orchestrator | Thursday 26 March 2026 05:14:17 +0000 (0:00:01.529) 0:11:41.396 ********
2026-03-26 05:14:50.327251 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.327263 | orchestrator |
2026-03-26 05:14:50.327275 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 05:14:50.327287 | orchestrator | Thursday 26 March 2026 05:14:19 +0000 (0:00:01.555) 0:11:42.951 ********
2026-03-26 05:14:50.327299 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327311 | orchestrator |
2026-03-26 05:14:50.327323 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:14:50.327335 | orchestrator | Thursday 26 March 2026 05:14:20 +0000 (0:00:00.806) 0:11:43.758 ********
2026-03-26 05:14:50.327348 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.327360 | orchestrator |
2026-03-26 05:14:50.327372 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:14:50.327385 | orchestrator | Thursday 26 March 2026 05:14:20 +0000 (0:00:00.824) 0:11:44.582 ********
2026-03-26 05:14:50.327397 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327407 | orchestrator |
2026-03-26 05:14:50.327418 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:14:50.327428 | orchestrator | Thursday 26 March 2026 05:14:21 +0000 (0:00:00.801) 0:11:45.383 ********
2026-03-26 05:14:50.327439 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327450 | orchestrator |
2026-03-26 05:14:50.327460 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:14:50.327471 | orchestrator | Thursday 26 March 2026 05:14:22 +0000 (0:00:00.798) 0:11:46.182 ********
2026-03-26 05:14:50.327481 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327492 | orchestrator |
2026-03-26 05:14:50.327502 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:14:50.327513 | orchestrator | Thursday 26 March 2026 05:14:23 +0000 (0:00:00.775) 0:11:46.957 ********
2026-03-26 05:14:50.327523 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327534 | orchestrator |
2026-03-26 05:14:50.327544 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 05:14:50.327555 | orchestrator | Thursday 26 March 2026 05:14:24 +0000 (0:00:00.805) 0:11:47.762 ********
2026-03-26 05:14:50.327565 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327576 | orchestrator |
2026-03-26 05:14:50.327586 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 05:14:50.327597 | orchestrator | Thursday 26 March 2026 05:14:24 +0000 (0:00:00.819) 0:11:48.582 ********
2026-03-26 05:14:50.327607 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.327618 | orchestrator |
2026-03-26 05:14:50.327628 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 05:14:50.327639 | orchestrator | Thursday 26 March 2026 05:14:25 +0000 (0:00:00.865) 0:11:49.448 ********
2026-03-26 05:14:50.327649 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.327660 | orchestrator |
2026-03-26 05:14:50.327670 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 05:14:50.327681 | orchestrator | Thursday 26 March 2026 05:14:26 +0000 (0:00:00.805) 0:11:50.253 ********
2026-03-26 05:14:50.327693 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.327711 | orchestrator |
2026-03-26 05:14:50.327730 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 05:14:50.327749 | orchestrator | Thursday 26 March 2026 05:14:27 +0000 (0:00:00.831) 0:11:51.085 ********
2026-03-26 05:14:50.327762 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327772 | orchestrator |
2026-03-26 05:14:50.327783 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 05:14:50.327794 | orchestrator | Thursday 26 March 2026 05:14:28 +0000 (0:00:00.766) 0:11:51.852 ********
2026-03-26 05:14:50.327828 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327840 | orchestrator |
2026-03-26 05:14:50.327851 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 05:14:50.327880 | orchestrator | Thursday 26 March 2026 05:14:28 +0000 (0:00:00.789) 0:11:52.642 ********
2026-03-26 05:14:50.327891 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327902 | orchestrator |
2026-03-26 05:14:50.327913 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 05:14:50.327923 | orchestrator | Thursday 26 March 2026 05:14:29 +0000 (0:00:00.879) 0:11:53.521 ********
2026-03-26 05:14:50.327934 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327944 | orchestrator |
2026-03-26 05:14:50.327955 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 05:14:50.327965 | orchestrator | Thursday 26 March 2026 05:14:30 +0000 (0:00:00.760) 0:11:54.281 ********
2026-03-26 05:14:50.327976 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.327986 | orchestrator |
2026-03-26 05:14:50.327997 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 05:14:50.328007 | orchestrator | Thursday 26 March 2026 05:14:31 +0000 (0:00:00.787) 0:11:55.068 ********
2026-03-26 05:14:50.328026 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328045 | orchestrator |
2026-03-26 05:14:50.328061 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 05:14:50.328072 | orchestrator | Thursday 26 March 2026 05:14:32 +0000 (0:00:00.783) 0:11:55.852 ********
2026-03-26 05:14:50.328083 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328093 | orchestrator |
2026-03-26 05:14:50.328104 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 05:14:50.328157 | orchestrator | Thursday 26 March 2026 05:14:32 +0000 (0:00:00.782) 0:11:56.634 ********
2026-03-26 05:14:50.328169 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328180 | orchestrator |
2026-03-26 05:14:50.328190 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 05:14:50.328201 | orchestrator | Thursday 26 March 2026 05:14:33 +0000 (0:00:00.824) 0:11:57.459 ********
2026-03-26 05:14:50.328211 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328222 | orchestrator |
2026-03-26 05:14:50.328232 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 05:14:50.328243 | orchestrator | Thursday 26 March 2026 05:14:34 +0000 (0:00:00.765) 0:11:58.224 ********
2026-03-26 05:14:50.328253 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328264 | orchestrator |
2026-03-26 05:14:50.328275 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 05:14:50.328285 | orchestrator | Thursday 26 March 2026 05:14:35 +0000 (0:00:00.833) 0:11:59.058 ********
2026-03-26 05:14:50.328296 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328307 | orchestrator |
2026-03-26 05:14:50.328318 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 05:14:50.328337 | orchestrator | Thursday 26 March 2026 05:14:36 +0000 (0:00:00.780) 0:11:59.839 ********
2026-03-26 05:14:50.328356 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328371 | orchestrator |
2026-03-26 05:14:50.328381 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 05:14:50.328392 | orchestrator | Thursday 26 March 2026 05:14:37 +0000 (0:00:00.826) 0:12:00.665 ********
2026-03-26 05:14:50.328402 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.328413 | orchestrator |
2026-03-26 05:14:50.328424 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 05:14:50.328434 | orchestrator | Thursday 26 March 2026 05:14:38 +0000 (0:00:01.617) 0:12:02.283 ********
2026-03-26 05:14:50.328445 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.328456 | orchestrator |
2026-03-26 05:14:50.328466 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 05:14:50.328477 | orchestrator | Thursday 26 March 2026 05:14:40 +0000 (0:00:02.084) 0:12:04.367 ********
2026-03-26 05:14:50.328496 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-03-26 05:14:50.328508 | orchestrator |
2026-03-26 05:14:50.328518 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 05:14:50.328529 | orchestrator | Thursday 26 March 2026 05:14:41 +0000 (0:00:01.209) 0:12:05.576 ********
2026-03-26 05:14:50.328540 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328550 | orchestrator |
2026-03-26 05:14:50.328561 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 05:14:50.328572 | orchestrator | Thursday 26 March 2026 05:14:43 +0000 (0:00:01.134) 0:12:06.711 ********
2026-03-26 05:14:50.328586 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328606 | orchestrator |
2026-03-26 05:14:50.328623 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 05:14:50.328634 | orchestrator | Thursday 26 March 2026 05:14:44 +0000 (0:00:01.146) 0:12:07.858 ********
2026-03-26 05:14:50.328644 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 05:14:50.328655 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 05:14:50.328666 | orchestrator |
2026-03-26 05:14:50.328676 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 05:14:50.328687 | orchestrator | Thursday 26 March 2026 05:14:46 +0000 (0:00:01.852) 0:12:09.711 ********
2026-03-26 05:14:50.328698 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:14:50.328708 | orchestrator |
2026-03-26 05:14:50.328719 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-26 05:14:50.328729 | orchestrator | Thursday 26 March 2026 05:14:47 +0000 (0:00:01.537) 0:12:11.249 ********
2026-03-26 05:14:50.328740 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328751 | orchestrator |
2026-03-26 05:14:50.328761 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-26 05:14:50.328772 | orchestrator | Thursday 26 March 2026 05:14:48 +0000 (0:00:01.162) 0:12:12.411 ********
2026-03-26 05:14:50.328782 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:14:50.328793 | orchestrator |
2026-03-26 05:14:50.328809 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 05:14:50.328820 | orchestrator | Thursday 26 March 2026 05:14:49 +0000 (0:00:00.805) 0:12:13.216 ********
2026-03-26 05:14:50.328840 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.256936 | orchestrator |
2026-03-26 05:15:30.257044 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 05:15:30.257059 | orchestrator | Thursday 26 March 2026 05:14:50 +0000 (0:00:00.755) 0:12:13.972 ********
2026-03-26 05:15:30.257070 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-03-26 05:15:30.257081 | orchestrator |
2026-03-26 05:15:30.257091 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-26 05:15:30.257100 | orchestrator | Thursday 26 March 2026 05:14:51 +0000 (0:00:01.138) 0:12:15.111 ********
2026-03-26 05:15:30.257110 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:15:30.257121 | orchestrator |
2026-03-26 05:15:30.257131 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-26 05:15:30.257141 | orchestrator | Thursday 26 March 2026 05:14:53 +0000 (0:00:01.856) 0:12:16.967 ********
2026-03-26 05:15:30.257150 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-26 05:15:30.257213 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-26 05:15:30.257223 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-26 05:15:30.257233 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257244 | orchestrator |
2026-03-26 05:15:30.257253 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-26 05:15:30.257263 | orchestrator | Thursday 26 March 2026 05:14:54 +0000 (0:00:01.130) 0:12:18.098 ********
2026-03-26 05:15:30.257294 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257305 | orchestrator |
2026-03-26 05:15:30.257314 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-26 05:15:30.257324 | orchestrator | Thursday 26 March 2026 05:14:55 +0000 (0:00:01.155) 0:12:19.253 ********
2026-03-26 05:15:30.257333 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257343 | orchestrator |
2026-03-26 05:15:30.257352 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-26 05:15:30.257362 | orchestrator | Thursday 26 March 2026 05:14:56 +0000 (0:00:01.211) 0:12:20.464 ********
2026-03-26 05:15:30.257371 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257380 | orchestrator |
2026-03-26 05:15:30.257390 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-26 05:15:30.257399 | orchestrator | Thursday 26 March 2026 05:14:58 +0000 (0:00:01.200) 0:12:21.665 ********
2026-03-26 05:15:30.257409 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257418 | orchestrator |
2026-03-26 05:15:30.257428 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-26 05:15:30.257437 | orchestrator | Thursday 26 March 2026 05:14:59 +0000 (0:00:01.192) 0:12:22.857 ********
2026-03-26 05:15:30.257447 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257457 | orchestrator |
2026-03-26 05:15:30.257467 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 05:15:30.257478 | orchestrator | Thursday 26 March 2026 05:15:00 +0000 (0:00:00.811) 0:12:23.669 ********
2026-03-26 05:15:30.257489 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:15:30.257500 | orchestrator |
2026-03-26 05:15:30.257510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 05:15:30.257521 | orchestrator | Thursday 26 March 2026 05:15:02 +0000 (0:00:02.213) 0:12:25.883 ********
2026-03-26 05:15:30.257532 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:15:30.257542 | orchestrator |
2026-03-26 05:15:30.257553 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 05:15:30.257563 | orchestrator | Thursday 26 March 2026 05:15:03 +0000 (0:00:00.787) 0:12:26.671 ********
2026-03-26 05:15:30.257574 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-03-26 05:15:30.257585 | orchestrator |
2026-03-26 05:15:30.257595 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 05:15:30.257606 | orchestrator | Thursday 26 March 2026 05:15:04 +0000 (0:00:01.115) 0:12:27.787 ********
2026-03-26 05:15:30.257617 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257628 | orchestrator |
2026-03-26 05:15:30.257639 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 05:15:30.257649 | orchestrator | Thursday 26 March 2026 05:15:05 +0000 (0:00:01.188) 0:12:28.975 ********
2026-03-26 05:15:30.257660 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257670 | orchestrator |
2026-03-26 05:15:30.257681 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 05:15:30.257692 | orchestrator | Thursday 26 March 2026 05:15:06 +0000 (0:00:01.138) 0:12:30.114 ********
2026-03-26 05:15:30.257702 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257714 | orchestrator |
2026-03-26 05:15:30.257724 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 05:15:30.257735 | orchestrator | Thursday 26 March 2026 05:15:07 +0000 (0:00:01.115) 0:12:31.230 ********
2026-03-26 05:15:30.257746 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257757 | orchestrator |
2026-03-26 05:15:30.257768 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 05:15:30.257778 | orchestrator | Thursday 26 March 2026 05:15:08 +0000 (0:00:01.155) 0:12:32.385 ********
2026-03-26 05:15:30.257789 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257800 | orchestrator |
2026-03-26 05:15:30.257811 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 05:15:30.257828 | orchestrator | Thursday 26 March 2026 05:15:09 +0000 (0:00:01.131) 0:12:33.516 ********
2026-03-26 05:15:30.257839 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257849 | orchestrator |
2026-03-26 05:15:30.257858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 05:15:30.257868 | orchestrator | Thursday 26 March 2026 05:15:10 +0000 (0:00:01.140) 0:12:34.656 ********
2026-03-26 05:15:30.257892 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257902 | orchestrator |
2026-03-26 05:15:30.257912 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 05:15:30.257937 | orchestrator | Thursday 26 March 2026 05:15:12 +0000 (0:00:01.206) 0:12:35.862 ********
2026-03-26 05:15:30.257948 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.257957 | orchestrator |
2026-03-26 05:15:30.257967 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 05:15:30.257976 | orchestrator | Thursday 26 March 2026 05:15:13 +0000 (0:00:01.119) 0:12:36.982 ********
2026-03-26 05:15:30.257986 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:15:30.257995 | orchestrator |
2026-03-26 05:15:30.258005 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 05:15:30.258014 | orchestrator | Thursday 26 March 2026 05:15:14 +0000 (0:00:00.790) 0:12:37.773 ********
2026-03-26 05:15:30.258082 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-03-26 05:15:30.258092 | orchestrator |
2026-03-26 05:15:30.258101 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 05:15:30.258111 | orchestrator | Thursday 26 March 2026 05:15:15 +0000 (0:00:01.126) 0:12:38.899 ********
2026-03-26 05:15:30.258121 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-03-26 05:15:30.258131 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-26 05:15:30.258140 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-26 05:15:30.258150 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-26 05:15:30.258177 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-26 05:15:30.258187 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-26 05:15:30.258197 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-26 05:15:30.258206 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-26 05:15:30.258216 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 05:15:30.258225 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 05:15:30.258235 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 05:15:30.258244 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 05:15:30.258254 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 05:15:30.258263 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 05:15:30.258272 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-03-26 05:15:30.258282 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-03-26 05:15:30.258291 | orchestrator |
2026-03-26 05:15:30.258301 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 05:15:30.258310 | orchestrator | Thursday 26 March 2026 05:15:21 +0000 (0:00:06.237) 0:12:45.137 ********
2026-03-26 05:15:30.258320 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258329 | orchestrator |
2026-03-26 05:15:30.258339 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 05:15:30.258348 | orchestrator | Thursday 26 March 2026 05:15:22 +0000 (0:00:00.794) 0:12:45.931 ********
2026-03-26 05:15:30.258358 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258367 | orchestrator |
2026-03-26 05:15:30.258377 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 05:15:30.258386 | orchestrator | Thursday 26 March 2026 05:15:23 +0000 (0:00:00.780) 0:12:46.711 ********
2026-03-26 05:15:30.258403 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258413 | orchestrator |
2026-03-26 05:15:30.258422 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 05:15:30.258432 | orchestrator | Thursday 26 March 2026 05:15:23 +0000 (0:00:00.787) 0:12:47.498 ********
2026-03-26 05:15:30.258441 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258451 | orchestrator |
2026-03-26 05:15:30.258460 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 05:15:30.258470 | orchestrator | Thursday 26 March 2026 05:15:24 +0000 (0:00:00.796) 0:12:48.295 ********
2026-03-26 05:15:30.258479 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258489 | orchestrator |
2026-03-26 05:15:30.258498 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 05:15:30.258508 | orchestrator | Thursday 26 March 2026 05:15:25 +0000 (0:00:00.840) 0:12:49.136 ********
2026-03-26 05:15:30.258517 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258527 | orchestrator |
2026-03-26 05:15:30.258536 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 05:15:30.258546 | orchestrator | Thursday 26 March 2026 05:15:26 +0000 (0:00:00.814) 0:12:49.951 ********
2026-03-26 05:15:30.258555 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258565 | orchestrator |
2026-03-26 05:15:30.258574 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 05:15:30.258584 | orchestrator | Thursday 26 March 2026 05:15:27 +0000 (0:00:00.781) 0:12:50.733 ********
2026-03-26 05:15:30.258593 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258603 | orchestrator |
2026-03-26 05:15:30.258612 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 05:15:30.258622 | orchestrator | Thursday 26 March 2026 05:15:27 +0000 (0:00:00.766) 0:12:51.499 ********
2026-03-26 05:15:30.258631 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258641 | orchestrator |
2026-03-26 05:15:30.258650 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 05:15:30.258660 | orchestrator | Thursday 26 March 2026 05:15:28 +0000 (0:00:00.783) 0:12:52.283 ********
2026-03-26 05:15:30.258669 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258678 | orchestrator |
2026-03-26 05:15:30.258688 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 05:15:30.258702 | orchestrator | Thursday 26 March 2026 05:15:29 +0000 (0:00:00.813) 0:12:53.097 ********
2026-03-26 05:15:30.258712 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:15:30.258721 | orchestrator |
2026-03-26 05:15:30.258738 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 05:16:16.996304 | orchestrator | Thursday 26 March 2026 05:15:30 +0000 (0:00:00.804) 0:12:53.902 ********
2026-03-26 05:16:16.996414 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996429 | orchestrator |
2026-03-26 05:16:16.996440 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 05:16:16.996451 | orchestrator | Thursday 26 March 2026 05:15:31 +0000 (0:00:00.800) 0:12:54.702 ********
2026-03-26 05:16:16.996460 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996470 | orchestrator |
2026-03-26 05:16:16.996480 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 05:16:16.996490 | orchestrator | Thursday 26 March 2026 05:15:31 +0000 (0:00:00.854) 0:12:55.556 ********
2026-03-26 05:16:16.996499 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996509 | orchestrator |
2026-03-26 05:16:16.996519 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 05:16:16.996528 | orchestrator | Thursday 26 March 2026 05:15:32 +0000 (0:00:00.833) 0:12:56.390 ********
2026-03-26 05:16:16.996538 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996547 | orchestrator |
2026-03-26 05:16:16.996557 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 05:16:16.996589 | orchestrator | Thursday 26 March 2026 05:15:33 +0000 (0:00:00.875) 0:12:57.266 ********
2026-03-26 05:16:16.996599 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996609 | orchestrator |
2026-03-26 05:16:16.996619 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 05:16:16.996628 | orchestrator | Thursday 26 March 2026 05:15:34 +0000 (0:00:00.770) 0:12:58.037 ********
2026-03-26 05:16:16.996638 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996647 | orchestrator |
2026-03-26 05:16:16.996657 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:16:16.996668 | orchestrator | Thursday 26 March 2026 05:15:35 +0000 (0:00:00.773) 0:12:58.811 ********
2026-03-26 05:16:16.996677 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996687 | orchestrator |
2026-03-26 05:16:16.996696 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:16:16.996706 | orchestrator | Thursday 26 March 2026 05:15:35 +0000 (0:00:00.771) 0:12:59.582 ********
2026-03-26 05:16:16.996715 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996724 | orchestrator |
2026-03-26 05:16:16.996734 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:16:16.996743 | orchestrator | Thursday 26 March 2026 05:15:36 +0000 (0:00:00.809) 0:13:00.392 ********
2026-03-26 05:16:16.996753 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996762 | orchestrator |
2026-03-26 05:16:16.996771 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:16:16.996782 | orchestrator | Thursday 26 March 2026 05:15:37 +0000 (0:00:00.774) 0:13:01.167 ********
2026-03-26 05:16:16.996793 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996803 | orchestrator |
2026-03-26 05:16:16.996814 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:16:16.996824 | orchestrator | Thursday 26 March 2026 05:15:38 +0000 (0:00:00.824) 0:13:01.992 ********
2026-03-26 05:16:16.996836 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-26 05:16:16.996860 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-26 05:16:16.996871 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-26 05:16:16.996882 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996892 | orchestrator |
2026-03-26 05:16:16.996903 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:16:16.996914 | orchestrator | Thursday 26 March 2026 05:15:39 +0000 (0:00:01.110) 0:13:03.102 ********
2026-03-26 05:16:16.996925 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-26 05:16:16.996935 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-26 05:16:16.996946 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-26 05:16:16.996959 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.996970 | orchestrator |
2026-03-26 05:16:16.996980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:16:16.996990 | orchestrator | Thursday 26 March 2026 05:15:40 +0000 (0:00:01.063) 0:13:04.165 ********
2026-03-26 05:16:16.997001 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-26 05:16:16.997012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-26 05:16:16.997022 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-26 05:16:16.997033 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.997044 | orchestrator |
2026-03-26 05:16:16.997055 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:16:16.997066 | orchestrator | Thursday 26 March 2026 05:15:41 +0000 (0:00:01.073) 0:13:05.239 ********
2026-03-26 05:16:16.997076 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.997087 | orchestrator |
2026-03-26 05:16:16.997098 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:16:16.997116 | orchestrator | Thursday 26 March 2026 05:15:42 +0000 (0:00:00.768) 0:13:06.008 ********
2026-03-26 05:16:16.997128 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-26 05:16:16.997138 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.997149 | orchestrator |
2026-03-26 05:16:16.997158 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 05:16:16.997168 | orchestrator | Thursday 26 March 2026 05:15:43 +0000 (0:00:00.888) 0:13:06.896 ********
2026-03-26 05:16:16.997177 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:16:16.997187 | orchestrator |
2026-03-26 05:16:16.997196 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-03-26 05:16:16.997236 | orchestrator | Thursday 26 March 2026 05:15:44 +0000 (0:00:01.395) 0:13:08.292 ********
2026-03-26 05:16:16.997246 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:16:16.997256 | orchestrator |
2026-03-26 05:16:16.997266 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-03-26 05:16:16.997292 | orchestrator | Thursday 26 March 2026 05:15:45 +0000 (0:00:00.865) 0:13:09.157 ********
2026-03-26 05:16:16.997302 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-03-26 05:16:16.997312 | orchestrator |
2026-03-26 05:16:16.997322 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-03-26 05:16:16.997331 | orchestrator | Thursday 26 March 2026 05:15:46 +0000 (0:00:01.241) 0:13:10.398 ********
2026-03-26 05:16:16.997341 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-03-26 05:16:16.997351 | orchestrator |
2026-03-26 05:16:16.997360 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-03-26 05:16:16.997369 | orchestrator | Thursday 26 March 2026 05:15:49 +0000 (0:00:03.163) 0:13:13.562 ********
2026-03-26 05:16:16.997379 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:16:16.997388 | orchestrator |
2026-03-26 05:16:16.997398 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-03-26 05:16:16.997407 | orchestrator | Thursday 26 March 2026 05:15:51 +0000 (0:00:01.211) 0:13:14.774 ********
2026-03-26 05:16:16.997417 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:16:16.997426 | orchestrator |
2026-03-26 05:16:16.997436 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-03-26 05:16:16.997445 | orchestrator | Thursday 26 March 2026 05:15:52 +0000 (0:00:01.156) 0:13:15.930 ********
2026-03-26 05:16:16.997454 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:16:16.997464 | orchestrator |
2026-03-26 05:16:16.997473 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-03-26 05:16:16.997483 | orchestrator | Thursday 26 March 2026 05:15:53 +0000 (0:00:01.142) 0:13:17.073 ********
2026-03-26 05:16:16.997492 | orchestrator | changed: [testbed-node-1]
2026-03-26 05:16:16.997502 | orchestrator |
2026-03-26 05:16:16.997511 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-03-26 05:16:16.997520 | orchestrator | Thursday 26 March 2026 05:15:55 +0000 (0:00:02.017) 0:13:19.091 ********
2026-03-26 05:16:16.997530 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:16:16.997539 | orchestrator |
2026-03-26 05:16:16.997549 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-03-26 05:16:16.997558 | orchestrator | Thursday 26 March 2026 05:15:56 +0000 (0:00:01.554) 0:13:20.646 ********
2026-03-26 05:16:16.997568 | orchestrator | ok: [testbed-node-1]
2026-03-26
05:16:16.997577 | orchestrator | 2026-03-26 05:16:16.997586 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-26 05:16:16.997596 | orchestrator | Thursday 26 March 2026 05:15:58 +0000 (0:00:01.544) 0:13:22.190 ******** 2026-03-26 05:16:16.997605 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:16:16.997615 | orchestrator | 2026-03-26 05:16:16.997624 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-26 05:16:16.997634 | orchestrator | Thursday 26 March 2026 05:16:00 +0000 (0:00:01.473) 0:13:23.663 ******** 2026-03-26 05:16:16.997643 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:16:16.997661 | orchestrator | 2026-03-26 05:16:16.997677 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-26 05:16:16.997693 | orchestrator | Thursday 26 March 2026 05:16:01 +0000 (0:00:01.616) 0:13:25.281 ******** 2026-03-26 05:16:16.997708 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:16:16.997722 | orchestrator | 2026-03-26 05:16:16.997737 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-26 05:16:16.997753 | orchestrator | Thursday 26 March 2026 05:16:03 +0000 (0:00:01.538) 0:13:26.819 ******** 2026-03-26 05:16:16.997769 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 05:16:16.997785 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-26 05:16:16.997802 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-26 05:16:16.997818 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-03-26 05:16:16.997832 | orchestrator | 2026-03-26 05:16:16.997842 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-26 05:16:16.997851 | orchestrator | 
Thursday 26 March 2026 05:16:07 +0000 (0:00:04.191) 0:13:31.011 ******** 2026-03-26 05:16:16.997861 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:16:16.997870 | orchestrator | 2026-03-26 05:16:16.997880 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-26 05:16:16.997889 | orchestrator | Thursday 26 March 2026 05:16:09 +0000 (0:00:02.026) 0:13:33.038 ******** 2026-03-26 05:16:16.997898 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:16:16.997908 | orchestrator | 2026-03-26 05:16:16.997917 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-26 05:16:16.997927 | orchestrator | Thursday 26 March 2026 05:16:10 +0000 (0:00:01.175) 0:13:34.213 ******** 2026-03-26 05:16:16.997936 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:16:16.997945 | orchestrator | 2026-03-26 05:16:16.997955 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-26 05:16:16.997964 | orchestrator | Thursday 26 March 2026 05:16:11 +0000 (0:00:01.129) 0:13:35.343 ******** 2026-03-26 05:16:16.997973 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:16:16.997983 | orchestrator | 2026-03-26 05:16:16.997992 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-26 05:16:16.998001 | orchestrator | Thursday 26 March 2026 05:16:13 +0000 (0:00:01.741) 0:13:37.085 ******** 2026-03-26 05:16:16.998011 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:16:16.998071 | orchestrator | 2026-03-26 05:16:16.998082 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-26 05:16:16.998091 | orchestrator | Thursday 26 March 2026 05:16:15 +0000 (0:00:01.576) 0:13:38.661 ******** 2026-03-26 05:16:16.998100 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:16:16.998110 | orchestrator | 2026-03-26 05:16:16.998119 | orchestrator | TASK 
[ceph-mon : Include start_monitor.yml] ************************************ 2026-03-26 05:16:16.998135 | orchestrator | Thursday 26 March 2026 05:16:15 +0000 (0:00:00.808) 0:13:39.470 ******** 2026-03-26 05:16:16.998154 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-03-26 05:16:16.998164 | orchestrator | 2026-03-26 05:16:16.998181 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-26 05:17:23.223234 | orchestrator | Thursday 26 March 2026 05:16:16 +0000 (0:00:01.167) 0:13:40.638 ******** 2026-03-26 05:17:23.223403 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.223421 | orchestrator | 2026-03-26 05:17:23.223434 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-26 05:17:23.223445 | orchestrator | Thursday 26 March 2026 05:16:18 +0000 (0:00:01.170) 0:13:41.809 ******** 2026-03-26 05:17:23.223456 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.223467 | orchestrator | 2026-03-26 05:17:23.223478 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-26 05:17:23.223490 | orchestrator | Thursday 26 March 2026 05:16:19 +0000 (0:00:01.165) 0:13:42.974 ******** 2026-03-26 05:17:23.223526 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-03-26 05:17:23.223538 | orchestrator | 2026-03-26 05:17:23.223549 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-26 05:17:23.223560 | orchestrator | Thursday 26 March 2026 05:16:20 +0000 (0:00:01.140) 0:13:44.114 ******** 2026-03-26 05:17:23.223571 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:17:23.223582 | orchestrator | 2026-03-26 05:17:23.223593 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-26 05:17:23.223604 | orchestrator | 
Thursday 26 March 2026 05:16:23 +0000 (0:00:02.643) 0:13:46.758 ******** 2026-03-26 05:17:23.223628 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:17:23.223640 | orchestrator | 2026-03-26 05:17:23.223651 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-26 05:17:23.223673 | orchestrator | Thursday 26 March 2026 05:16:25 +0000 (0:00:01.972) 0:13:48.730 ******** 2026-03-26 05:17:23.223684 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:17:23.223695 | orchestrator | 2026-03-26 05:17:23.223705 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-26 05:17:23.223716 | orchestrator | Thursday 26 March 2026 05:16:27 +0000 (0:00:02.330) 0:13:51.061 ******** 2026-03-26 05:17:23.223727 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:17:23.223738 | orchestrator | 2026-03-26 05:17:23.223752 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-26 05:17:23.223763 | orchestrator | Thursday 26 March 2026 05:16:30 +0000 (0:00:03.186) 0:13:54.247 ******** 2026-03-26 05:17:23.223777 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-03-26 05:17:23.223790 | orchestrator | 2026-03-26 05:17:23.223808 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-26 05:17:23.223827 | orchestrator | Thursday 26 March 2026 05:16:31 +0000 (0:00:01.118) 0:13:55.366 ******** 2026-03-26 05:17:23.223845 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-26 05:17:23.223865 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:17:23.223885 | orchestrator | 2026-03-26 05:17:23.223906 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-26 05:17:23.223925 | orchestrator | Thursday 26 March 2026 05:16:54 +0000 (0:00:22.816) 0:14:18.182 ******** 2026-03-26 05:17:23.223945 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:17:23.223958 | orchestrator | 2026-03-26 05:17:23.223971 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-26 05:17:23.223984 | orchestrator | Thursday 26 March 2026 05:16:57 +0000 (0:00:02.677) 0:14:20.859 ******** 2026-03-26 05:17:23.223997 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224009 | orchestrator | 2026-03-26 05:17:23.224021 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-26 05:17:23.224034 | orchestrator | Thursday 26 March 2026 05:16:58 +0000 (0:00:00.807) 0:14:21.667 ******** 2026-03-26 05:17:23.224049 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-26 05:17:23.224065 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-26 05:17:23.224078 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-26 05:17:23.224116 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-26 05:17:23.224149 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-26 05:17:23.224163 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}])  2026-03-26 05:17:23.224177 | orchestrator | 2026-03-26 05:17:23.224188 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-26 05:17:23.224199 | orchestrator | Thursday 26 March 2026 05:17:07 +0000 (0:00:09.327) 0:14:30.995 ******** 2026-03-26 05:17:23.224210 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:17:23.224221 | orchestrator | 
2026-03-26 05:17:23.224232 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:17:23.224243 | orchestrator | Thursday 26 March 2026 05:17:09 +0000 (0:00:02.117) 0:14:33.112 ******** 2026-03-26 05:17:23.224253 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:17:23.224264 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-26 05:17:23.224316 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-26 05:17:23.224330 | orchestrator | 2026-03-26 05:17:23.224341 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:17:23.224351 | orchestrator | Thursday 26 March 2026 05:17:11 +0000 (0:00:01.877) 0:14:34.989 ******** 2026-03-26 05:17:23.224362 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-26 05:17:23.224373 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-26 05:17:23.224384 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-26 05:17:23.224394 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224405 | orchestrator | 2026-03-26 05:17:23.224416 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-26 05:17:23.224426 | orchestrator | Thursday 26 March 2026 05:17:12 +0000 (0:00:01.109) 0:14:36.099 ******** 2026-03-26 05:17:23.224437 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224448 | orchestrator | 2026-03-26 05:17:23.224459 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-26 05:17:23.224469 | orchestrator | Thursday 26 March 2026 05:17:13 +0000 (0:00:00.761) 0:14:36.861 ******** 2026-03-26 05:17:23.224480 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:17:23.224491 | orchestrator | 2026-03-26 05:17:23.224501 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-26 05:17:23.224512 | orchestrator | Thursday 26 March 2026 05:17:15 +0000 (0:00:02.347) 0:14:39.208 ******** 2026-03-26 05:17:23.224523 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224533 | orchestrator | 2026-03-26 05:17:23.224544 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-26 05:17:23.224563 | orchestrator | Thursday 26 March 2026 05:17:16 +0000 (0:00:00.800) 0:14:40.008 ******** 2026-03-26 05:17:23.224573 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224584 | orchestrator | 2026-03-26 05:17:23.224595 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-26 05:17:23.224606 | orchestrator | Thursday 26 March 2026 05:17:17 +0000 (0:00:00.794) 0:14:40.803 ******** 2026-03-26 05:17:23.224616 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224627 | orchestrator | 2026-03-26 05:17:23.224638 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-26 05:17:23.224649 | orchestrator | Thursday 26 March 2026 05:17:17 +0000 (0:00:00.773) 0:14:41.577 ******** 2026-03-26 05:17:23.224659 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224670 | orchestrator | 2026-03-26 05:17:23.224681 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-26 05:17:23.224692 | orchestrator | Thursday 26 March 2026 05:17:18 +0000 (0:00:00.798) 0:14:42.375 ******** 2026-03-26 05:17:23.224702 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224713 | 
orchestrator | 2026-03-26 05:17:23.224724 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-26 05:17:23.224735 | orchestrator | Thursday 26 March 2026 05:17:19 +0000 (0:00:00.784) 0:14:43.160 ******** 2026-03-26 05:17:23.224746 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224756 | orchestrator | 2026-03-26 05:17:23.224767 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-26 05:17:23.224778 | orchestrator | Thursday 26 March 2026 05:17:20 +0000 (0:00:00.800) 0:14:43.960 ******** 2026-03-26 05:17:23.224788 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:17:23.224799 | orchestrator | 2026-03-26 05:17:23.224810 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-03-26 05:17:23.224821 | orchestrator | 2026-03-26 05:17:23.224831 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-03-26 05:17:23.224842 | orchestrator | Thursday 26 March 2026 05:17:21 +0000 (0:00:00.955) 0:14:44.915 ******** 2026-03-26 05:17:23.224857 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:23.224878 | orchestrator | 2026-03-26 05:17:23.224906 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-03-26 05:17:23.224927 | orchestrator | Thursday 26 March 2026 05:17:22 +0000 (0:00:01.157) 0:14:46.072 ******** 2026-03-26 05:17:23.224946 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:23.224962 | orchestrator | 2026-03-26 05:17:23.224973 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-03-26 05:17:23.224992 | orchestrator | Thursday 26 March 2026 05:17:23 +0000 (0:00:00.794) 0:14:46.867 ******** 2026-03-26 05:17:48.158659 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:17:48.158773 | orchestrator | 2026-03-26 05:17:48.158789 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-03-26 05:17:48.158801 | orchestrator | Thursday 26 March 2026 05:17:23 +0000 (0:00:00.779) 0:14:47.647 ******** 2026-03-26 05:17:48.158811 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.158821 | orchestrator | 2026-03-26 05:17:48.158831 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:17:48.158841 | orchestrator | Thursday 26 March 2026 05:17:24 +0000 (0:00:00.820) 0:14:48.468 ******** 2026-03-26 05:17:48.158851 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-26 05:17:48.158860 | orchestrator | 2026-03-26 05:17:48.158870 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:17:48.158879 | orchestrator | Thursday 26 March 2026 05:17:25 +0000 (0:00:01.154) 0:14:49.622 ******** 2026-03-26 05:17:48.158889 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.158898 | orchestrator | 2026-03-26 05:17:48.158908 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:17:48.158917 | orchestrator | Thursday 26 March 2026 05:17:27 +0000 (0:00:01.440) 0:14:51.063 ******** 2026-03-26 05:17:48.158927 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.158957 | orchestrator | 2026-03-26 05:17:48.158968 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:17:48.158977 | orchestrator | Thursday 26 March 2026 05:17:28 +0000 (0:00:01.110) 0:14:52.173 ******** 2026-03-26 05:17:48.158987 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.158996 | orchestrator | 2026-03-26 05:17:48.159006 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:17:48.159015 | orchestrator | Thursday 26 March 2026 05:17:29 +0000 (0:00:01.443) 0:14:53.617 
******** 2026-03-26 05:17:48.159025 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.159034 | orchestrator | 2026-03-26 05:17:48.159044 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:17:48.159053 | orchestrator | Thursday 26 March 2026 05:17:31 +0000 (0:00:01.175) 0:14:54.793 ******** 2026-03-26 05:17:48.159064 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.159073 | orchestrator | 2026-03-26 05:17:48.159083 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 05:17:48.159092 | orchestrator | Thursday 26 March 2026 05:17:32 +0000 (0:00:01.169) 0:14:55.963 ******** 2026-03-26 05:17:48.159101 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.159111 | orchestrator | 2026-03-26 05:17:48.159120 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:17:48.159131 | orchestrator | Thursday 26 March 2026 05:17:33 +0000 (0:00:01.280) 0:14:57.244 ******** 2026-03-26 05:17:48.159140 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:17:48.159150 | orchestrator | 2026-03-26 05:17:48.159160 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 05:17:48.159169 | orchestrator | Thursday 26 March 2026 05:17:34 +0000 (0:00:01.231) 0:14:58.476 ******** 2026-03-26 05:17:48.159178 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.159188 | orchestrator | 2026-03-26 05:17:48.159199 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 05:17:48.159210 | orchestrator | Thursday 26 March 2026 05:17:35 +0000 (0:00:01.169) 0:14:59.645 ******** 2026-03-26 05:17:48.159222 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:17:48.159233 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-03-26 05:17:48.159244 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-26 05:17:48.159254 | orchestrator | 2026-03-26 05:17:48.159265 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:17:48.159275 | orchestrator | Thursday 26 March 2026 05:17:38 +0000 (0:00:02.218) 0:15:01.863 ******** 2026-03-26 05:17:48.159286 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:17:48.159323 | orchestrator | 2026-03-26 05:17:48.159335 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 05:17:48.159346 | orchestrator | Thursday 26 March 2026 05:17:39 +0000 (0:00:01.300) 0:15:03.164 ******** 2026-03-26 05:17:48.159356 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:17:48.159367 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:17:48.159378 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-26 05:17:48.159389 | orchestrator | 2026-03-26 05:17:48.159400 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:17:48.159411 | orchestrator | Thursday 26 March 2026 05:17:42 +0000 (0:00:03.182) 0:15:06.346 ******** 2026-03-26 05:17:48.159421 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-26 05:17:48.159433 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-26 05:17:48.159444 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-26 05:17:48.159454 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:17:48.159465 | orchestrator | 2026-03-26 05:17:48.159476 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:17:48.159498 | orchestrator | Thursday 26 March 2026 05:17:44 +0000 (0:00:01.424) 
0:15:07.771 ******** 2026-03-26 05:17:48.159525 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:17:48.159540 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 05:17:48.159567 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 05:17:48.159578 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:17:48.159588 | orchestrator | 2026-03-26 05:17:48.159598 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 05:17:48.159607 | orchestrator | Thursday 26 March 2026 05:17:45 +0000 (0:00:01.660) 0:15:09.431 ******** 2026-03-26 05:17:48.159619 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:17:48.159632 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:17:48.159642 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:17:48.159651 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:17:48.159661 | orchestrator | 2026-03-26 05:17:48.159670 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 05:17:48.159680 | orchestrator | Thursday 26 March 2026 05:17:46 +0000 (0:00:01.154) 0:15:10.586 ******** 2026-03-26 05:17:48.159692 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:17:40.373487', 'end': '2026-03-26 05:17:40.412891', 'delta': '0:00:00.039404', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 05:17:48.159705 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:17:40.900551', 'end': '2026-03-26 
05:17:40.944725', 'delta': '0:00:00.044174', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 05:17:48.159734 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2a382ea60872', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:17:41.454921', 'end': '2026-03-26 05:17:41.516677', 'delta': '0:00:00.061756', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2a382ea60872'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 05:18:06.810242 | orchestrator | 2026-03-26 05:18:06.810434 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 05:18:06.810462 | orchestrator | Thursday 26 March 2026 05:17:48 +0000 (0:00:01.214) 0:15:11.801 ******** 2026-03-26 05:18:06.810479 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:18:06.810491 | orchestrator | 2026-03-26 05:18:06.810503 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 05:18:06.810514 | orchestrator | Thursday 26 March 2026 05:17:49 +0000 (0:00:01.310) 0:15:13.111 ******** 2026-03-26 05:18:06.810525 | orchestrator | skipping: 
[testbed-node-2]
2026-03-26 05:18:06.810537 | orchestrator |
2026-03-26 05:18:06.810548 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-26 05:18:06.810559 | orchestrator | Thursday 26 March 2026 05:17:50 +0000 (0:00:01.285) 0:15:14.397 ********
2026-03-26 05:18:06.810569 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:18:06.810580 | orchestrator |
2026-03-26 05:18:06.810592 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 05:18:06.810603 | orchestrator | Thursday 26 March 2026 05:17:51 +0000 (0:00:01.202) 0:15:15.600 ********
2026-03-26 05:18:06.810613 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:18:06.810624 | orchestrator |
2026-03-26 05:18:06.810635 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:18:06.810646 | orchestrator | Thursday 26 March 2026 05:17:53 +0000 (0:00:02.005) 0:15:17.605 ********
2026-03-26 05:18:06.810657 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:18:06.810667 | orchestrator |
2026-03-26 05:18:06.810678 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 05:18:06.810689 | orchestrator | Thursday 26 March 2026 05:17:55 +0000 (0:00:01.157) 0:15:18.763 ********
2026-03-26 05:18:06.810700 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.810711 | orchestrator |
2026-03-26 05:18:06.810721 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 05:18:06.810732 | orchestrator | Thursday 26 March 2026 05:17:56 +0000 (0:00:01.150) 0:15:19.914 ********
2026-03-26 05:18:06.810743 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.810755 | orchestrator |
2026-03-26 05:18:06.810766 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26
05:18:06.810777 | orchestrator | Thursday 26 March 2026 05:17:57 +0000 (0:00:01.207) 0:15:21.122 ********
2026-03-26 05:18:06.810788 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.810798 | orchestrator |
2026-03-26 05:18:06.810809 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 05:18:06.810820 | orchestrator | Thursday 26 March 2026 05:17:58 +0000 (0:00:01.132) 0:15:22.254 ********
2026-03-26 05:18:06.810831 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.810842 | orchestrator |
2026-03-26 05:18:06.810875 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 05:18:06.810886 | orchestrator | Thursday 26 March 2026 05:17:59 +0000 (0:00:01.178) 0:15:23.433 ********
2026-03-26 05:18:06.810897 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.810908 | orchestrator |
2026-03-26 05:18:06.810919 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 05:18:06.810930 | orchestrator | Thursday 26 March 2026 05:18:00 +0000 (0:00:01.137) 0:15:24.571 ********
2026-03-26 05:18:06.810941 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.810951 | orchestrator |
2026-03-26 05:18:06.810962 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 05:18:06.810973 | orchestrator | Thursday 26 March 2026 05:18:02 +0000 (0:00:01.158) 0:15:25.729 ********
2026-03-26 05:18:06.810984 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.810995 | orchestrator |
2026-03-26 05:18:06.811005 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 05:18:06.811016 | orchestrator | Thursday 26 March 2026 05:18:03 +0000 (0:00:01.142) 0:15:26.872 ********
2026-03-26 05:18:06.811027 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.811038 |
orchestrator |
2026-03-26 05:18:06.811049 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 05:18:06.811060 | orchestrator | Thursday 26 March 2026 05:18:04 +0000 (0:00:01.146) 0:15:28.019 ********
2026-03-26 05:18:06.811071 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:06.811082 | orchestrator |
2026-03-26 05:18:06.811092 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-26 05:18:06.811103 | orchestrator | Thursday 26 March 2026 05:18:05 +0000 (0:00:01.140) 0:15:29.160 ********
2026-03-26 05:18:06.811117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:18:06.811147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:18:06.811179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:18:06.811193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-26 05:18:06.811206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:18:06.811226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:18:06.811238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26
05:18:06.811267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7634648a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:18:08.033657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:18:08.033734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:18:08.033756 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:08.033763 | orchestrator |
2026-03-26 05:18:08.033768 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-26 05:18:08.033774 | orchestrator | Thursday 26 March 2026 05:18:06 +0000 (0:00:01.292) 0:15:30.452 ********
2026-03-26 05:18:08.033781 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0',
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033788 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033793 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033798 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1',
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033825 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033830 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033838 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard':
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033847 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7634648a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:08.033858 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:43.130603 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:18:43.130714 | orchestrator | skipping: [testbed-node-2]
2026-03-26
05:18:43.130726 | orchestrator |
2026-03-26 05:18:43.130734 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 05:18:43.130741 | orchestrator | Thursday 26 March 2026 05:18:08 +0000 (0:00:01.222) 0:15:31.675 ********
2026-03-26 05:18:43.130748 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:18:43.130755 | orchestrator |
2026-03-26 05:18:43.130761 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 05:18:43.130767 | orchestrator | Thursday 26 March 2026 05:18:09 +0000 (0:00:01.485) 0:15:33.164 ********
2026-03-26 05:18:43.130773 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:18:43.130780 | orchestrator |
2026-03-26 05:18:43.130786 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:18:43.130792 | orchestrator | Thursday 26 March 2026 05:18:10 +0000 (0:00:01.164) 0:15:34.328 ********
2026-03-26 05:18:43.130798 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:18:43.130804 | orchestrator |
2026-03-26 05:18:43.130810 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:18:43.130816 | orchestrator | Thursday 26 March 2026 05:18:12 +0000 (0:00:01.508) 0:15:35.837 ********
2026-03-26 05:18:43.130822 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.130828 | orchestrator |
2026-03-26 05:18:43.130834 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:18:43.130840 | orchestrator | Thursday 26 March 2026 05:18:13 +0000 (0:00:01.109) 0:15:36.947 ********
2026-03-26 05:18:43.130846 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.130852 | orchestrator |
2026-03-26 05:18:43.130858 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:18:43.130864 | orchestrator | Thursday 26 March 2026
05:18:14 +0000 (0:00:01.278) 0:15:38.225 ********
2026-03-26 05:18:43.130870 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.130876 | orchestrator |
2026-03-26 05:18:43.130882 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 05:18:43.130889 | orchestrator | Thursday 26 March 2026 05:18:15 +0000 (0:00:01.176) 0:15:39.402 ********
2026-03-26 05:18:43.130895 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-26 05:18:43.130902 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-26 05:18:43.130908 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:18:43.130914 | orchestrator |
2026-03-26 05:18:43.130920 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 05:18:43.130926 | orchestrator | Thursday 26 March 2026 05:18:17 +0000 (0:00:01.686) 0:15:41.088 ********
2026-03-26 05:18:43.130933 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-26 05:18:43.130939 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-26 05:18:43.130944 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:18:43.130951 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.130957 | orchestrator |
2026-03-26 05:18:43.130963 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 05:18:43.130969 | orchestrator | Thursday 26 March 2026 05:18:18 +0000 (0:00:01.213) 0:15:42.302 ********
2026-03-26 05:18:43.130975 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.130981 | orchestrator |
2026-03-26 05:18:43.130987 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 05:18:43.130993 | orchestrator | Thursday 26 March 2026 05:18:19 +0000 (0:00:01.119) 0:15:43.421 ********
2026-03-26 05:18:43.131004 |
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:18:43.131011 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:18:43.131017 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:18:43.131023 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:18:43.131029 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:18:43.131046 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:18:43.131052 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:18:43.131058 | orchestrator |
2026-03-26 05:18:43.131064 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 05:18:43.131070 | orchestrator | Thursday 26 March 2026 05:18:21 +0000 (0:00:01.904) 0:15:45.326 ********
2026-03-26 05:18:43.131076 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:18:43.131082 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:18:43.131088 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:18:43.131108 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:18:43.131126 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:18:43.131141 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:18:43.131147 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:18:43.131153 | orchestrator |
2026-03-26 05:18:43.131159 |
orchestrator | TASK [Get ceph cluster status] *************************************************
2026-03-26 05:18:43.131167 | orchestrator | Thursday 26 March 2026 05:18:23 +0000 (0:00:02.305) 0:15:47.632 ********
2026-03-26 05:18:43.131173 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131180 | orchestrator |
2026-03-26 05:18:43.131187 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-03-26 05:18:43.131194 | orchestrator | Thursday 26 March 2026 05:18:24 +0000 (0:00:00.919) 0:15:48.551 ********
2026-03-26 05:18:43.131201 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131208 | orchestrator |
2026-03-26 05:18:43.131215 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-03-26 05:18:43.131221 | orchestrator | Thursday 26 March 2026 05:18:25 +0000 (0:00:00.932) 0:15:49.484 ********
2026-03-26 05:18:43.131228 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131235 | orchestrator |
2026-03-26 05:18:43.131241 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-03-26 05:18:43.131248 | orchestrator | Thursday 26 March 2026 05:18:26 +0000 (0:00:00.916) 0:15:50.274 ********
2026-03-26 05:18:43.131255 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131262 | orchestrator |
2026-03-26 05:18:43.131269 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-03-26 05:18:43.131276 | orchestrator | Thursday 26 March 2026 05:18:27 +0000 (0:00:00.916) 0:15:51.191 ********
2026-03-26 05:18:43.131284 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131291 | orchestrator |
2026-03-26 05:18:43.131298 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-03-26 05:18:43.131305 | orchestrator | Thursday 26 March 2026 05:18:28 +0000 (0:00:00.769) 0:15:51.961 ********
2026-03-26 05:18:43.131312 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-26 05:18:43.131318 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-26 05:18:43.131324 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:18:43.131330 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131336 | orchestrator |
2026-03-26 05:18:43.131360 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-03-26 05:18:43.131374 | orchestrator | Thursday 26 March 2026 05:18:29 +0000 (0:00:01.408) 0:15:53.369 ********
2026-03-26 05:18:43.131380 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-03-26 05:18:43.131386 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-03-26 05:18:43.131392 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-03-26 05:18:43.131398 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-03-26 05:18:43.131404 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-03-26 05:18:43.131410 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-03-26 05:18:43.131416 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131422 | orchestrator |
2026-03-26 05:18:43.131428 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-03-26 05:18:43.131434 | orchestrator | Thursday 26 March 2026 05:18:31 +0000 (0:00:01.623) 0:15:54.993 ********
2026-03-26 05:18:43.131440 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:18:43.131446 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:18:43.131452 | orchestrator |
2026-03-26 05:18:43.131458 |
orchestrator | TASK [Mask the mgr service] ****************************************************
2026-03-26 05:18:43.131464 | orchestrator | Thursday 26 March 2026 05:18:34 +0000 (0:00:03.431) 0:15:58.424 ********
2026-03-26 05:18:43.131471 | orchestrator | changed: [testbed-node-2]
2026-03-26 05:18:43.131477 | orchestrator |
2026-03-26 05:18:43.131483 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 05:18:43.131489 | orchestrator | Thursday 26 March 2026 05:18:37 +0000 (0:00:02.252) 0:16:00.677 ********
2026-03-26 05:18:43.131495 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-03-26 05:18:43.131502 | orchestrator |
2026-03-26 05:18:43.131508 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 05:18:43.131514 | orchestrator | Thursday 26 March 2026 05:18:38 +0000 (0:00:01.081) 0:16:01.758 ********
2026-03-26 05:18:43.131521 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-03-26 05:18:43.131527 | orchestrator |
2026-03-26 05:18:43.131536 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 05:18:43.131543 | orchestrator | Thursday 26 March 2026 05:18:39 +0000 (0:00:01.146) 0:16:02.904 ********
2026-03-26 05:18:43.131549 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:18:43.131555 | orchestrator |
2026-03-26 05:18:43.131561 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 05:18:43.131567 | orchestrator | Thursday 26 March 2026 05:18:40 +0000 (0:00:01.594) 0:16:04.499 ********
2026-03-26 05:18:43.131573 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131579 | orchestrator |
2026-03-26 05:18:43.131585 | orchestrator | TASK [ceph-handler : Check for a mds container]
********************************
2026-03-26 05:18:43.131591 | orchestrator | Thursday 26 March 2026 05:18:41 +0000 (0:00:01.137) 0:16:05.636 ********
2026-03-26 05:18:43.131597 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:18:43.131603 | orchestrator |
2026-03-26 05:18:43.131610 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 05:18:43.131619 | orchestrator | Thursday 26 March 2026 05:18:43 +0000 (0:00:01.137) 0:16:06.774 ********
2026-03-26 05:19:24.658202 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:19:24.658304 | orchestrator |
2026-03-26 05:19:24.658318 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 05:19:24.658327 | orchestrator | Thursday 26 March 2026 05:18:44 +0000 (0:00:01.107) 0:16:07.881 ********
2026-03-26 05:19:24.658335 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:19:24.658343 | orchestrator |
2026-03-26 05:19:24.658351 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 05:19:24.658410 | orchestrator | Thursday 26 March 2026 05:18:45 +0000 (0:00:01.586) 0:16:09.468 ********
2026-03-26 05:19:24.658424 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:19:24.658436 | orchestrator |
2026-03-26 05:19:24.658448 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 05:19:24.658460 | orchestrator | Thursday 26 March 2026 05:18:46 +0000 (0:00:01.119) 0:16:10.588 ********
2026-03-26 05:19:24.658478 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:19:24.658492 | orchestrator |
2026-03-26 05:19:24.658505 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 05:19:24.658517 | orchestrator | Thursday 26 March 2026 05:18:48 +0000 (0:00:01.187) 0:16:11.776 ********
2026-03-26 05:19:24.658530 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:19:24.658542 | orchestrator | 2026-03-26 05:19:24.658555 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 05:19:24.658567 | orchestrator | Thursday 26 March 2026 05:18:49 +0000 (0:00:01.609) 0:16:13.386 ******** 2026-03-26 05:19:24.658580 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.658589 | orchestrator | 2026-03-26 05:19:24.658596 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 05:19:24.658604 | orchestrator | Thursday 26 March 2026 05:18:51 +0000 (0:00:01.563) 0:16:14.950 ******** 2026-03-26 05:19:24.658611 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658618 | orchestrator | 2026-03-26 05:19:24.658625 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 05:19:24.658632 | orchestrator | Thursday 26 March 2026 05:18:52 +0000 (0:00:00.777) 0:16:15.728 ******** 2026-03-26 05:19:24.658639 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.658646 | orchestrator | 2026-03-26 05:19:24.658654 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 05:19:24.658661 | orchestrator | Thursday 26 March 2026 05:18:52 +0000 (0:00:00.767) 0:16:16.496 ******** 2026-03-26 05:19:24.658668 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658675 | orchestrator | 2026-03-26 05:19:24.658682 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 05:19:24.658690 | orchestrator | Thursday 26 March 2026 05:18:53 +0000 (0:00:00.800) 0:16:17.296 ******** 2026-03-26 05:19:24.658697 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658704 | orchestrator | 2026-03-26 05:19:24.658711 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 05:19:24.658718 | orchestrator | Thursday 26 
March 2026 05:18:54 +0000 (0:00:00.802) 0:16:18.098 ******** 2026-03-26 05:19:24.658725 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658732 | orchestrator | 2026-03-26 05:19:24.658739 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 05:19:24.658746 | orchestrator | Thursday 26 March 2026 05:18:55 +0000 (0:00:00.789) 0:16:18.888 ******** 2026-03-26 05:19:24.658753 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658760 | orchestrator | 2026-03-26 05:19:24.658767 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 05:19:24.658774 | orchestrator | Thursday 26 March 2026 05:18:56 +0000 (0:00:00.775) 0:16:19.664 ******** 2026-03-26 05:19:24.658781 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658789 | orchestrator | 2026-03-26 05:19:24.658795 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 05:19:24.658803 | orchestrator | Thursday 26 March 2026 05:18:56 +0000 (0:00:00.779) 0:16:20.443 ******** 2026-03-26 05:19:24.658810 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.658817 | orchestrator | 2026-03-26 05:19:24.658824 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 05:19:24.658831 | orchestrator | Thursday 26 March 2026 05:18:57 +0000 (0:00:00.810) 0:16:21.254 ******** 2026-03-26 05:19:24.658838 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.658845 | orchestrator | 2026-03-26 05:19:24.658852 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 05:19:24.658867 | orchestrator | Thursday 26 March 2026 05:18:58 +0000 (0:00:00.800) 0:16:22.054 ******** 2026-03-26 05:19:24.658874 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.658882 | orchestrator | 2026-03-26 05:19:24.658889 | orchestrator | TASK 
[ceph-common : Include configure_repository.yml] ************************** 2026-03-26 05:19:24.658896 | orchestrator | Thursday 26 March 2026 05:18:59 +0000 (0:00:00.792) 0:16:22.846 ******** 2026-03-26 05:19:24.658903 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658910 | orchestrator | 2026-03-26 05:19:24.658917 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 05:19:24.658924 | orchestrator | Thursday 26 March 2026 05:19:00 +0000 (0:00:00.822) 0:16:23.669 ******** 2026-03-26 05:19:24.658944 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658951 | orchestrator | 2026-03-26 05:19:24.658959 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 05:19:24.658966 | orchestrator | Thursday 26 March 2026 05:19:00 +0000 (0:00:00.769) 0:16:24.439 ******** 2026-03-26 05:19:24.658973 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.658980 | orchestrator | 2026-03-26 05:19:24.658987 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 05:19:24.658994 | orchestrator | Thursday 26 March 2026 05:19:01 +0000 (0:00:00.769) 0:16:25.208 ******** 2026-03-26 05:19:24.659001 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659008 | orchestrator | 2026-03-26 05:19:24.659015 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 05:19:24.659022 | orchestrator | Thursday 26 March 2026 05:19:02 +0000 (0:00:00.799) 0:16:26.008 ******** 2026-03-26 05:19:24.659030 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659037 | orchestrator | 2026-03-26 05:19:24.659059 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 05:19:24.659066 | orchestrator | Thursday 26 March 2026 05:19:03 +0000 (0:00:00.755) 0:16:26.763 ******** 2026-03-26 
05:19:24.659074 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659081 | orchestrator | 2026-03-26 05:19:24.659088 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-26 05:19:24.659095 | orchestrator | Thursday 26 March 2026 05:19:03 +0000 (0:00:00.746) 0:16:27.510 ******** 2026-03-26 05:19:24.659102 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659109 | orchestrator | 2026-03-26 05:19:24.659117 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-26 05:19:24.659130 | orchestrator | Thursday 26 March 2026 05:19:04 +0000 (0:00:00.769) 0:16:28.279 ******** 2026-03-26 05:19:24.659142 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659154 | orchestrator | 2026-03-26 05:19:24.659165 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 05:19:24.659177 | orchestrator | Thursday 26 March 2026 05:19:05 +0000 (0:00:00.778) 0:16:29.058 ******** 2026-03-26 05:19:24.659189 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659200 | orchestrator | 2026-03-26 05:19:24.659212 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 05:19:24.659225 | orchestrator | Thursday 26 March 2026 05:19:06 +0000 (0:00:00.765) 0:16:29.824 ******** 2026-03-26 05:19:24.659237 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659250 | orchestrator | 2026-03-26 05:19:24.659258 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-26 05:19:24.659265 | orchestrator | Thursday 26 March 2026 05:19:06 +0000 (0:00:00.758) 0:16:30.583 ******** 2026-03-26 05:19:24.659272 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659279 | orchestrator | 2026-03-26 05:19:24.659286 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-26 05:19:24.659293 | orchestrator | Thursday 26 March 2026 05:19:07 +0000 (0:00:00.757) 0:16:31.341 ******** 2026-03-26 05:19:24.659301 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659308 | orchestrator | 2026-03-26 05:19:24.659322 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-26 05:19:24.659329 | orchestrator | Thursday 26 March 2026 05:19:08 +0000 (0:00:00.767) 0:16:32.109 ******** 2026-03-26 05:19:24.659336 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.659343 | orchestrator | 2026-03-26 05:19:24.659350 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 05:19:24.659357 | orchestrator | Thursday 26 March 2026 05:19:10 +0000 (0:00:01.645) 0:16:33.755 ******** 2026-03-26 05:19:24.659365 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.659372 | orchestrator | 2026-03-26 05:19:24.659405 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 05:19:24.659414 | orchestrator | Thursday 26 March 2026 05:19:12 +0000 (0:00:02.045) 0:16:35.800 ******** 2026-03-26 05:19:24.659421 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-03-26 05:19:24.659429 | orchestrator | 2026-03-26 05:19:24.659437 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-26 05:19:24.659444 | orchestrator | Thursday 26 March 2026 05:19:13 +0000 (0:00:01.119) 0:16:36.920 ******** 2026-03-26 05:19:24.659451 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659458 | orchestrator | 2026-03-26 05:19:24.659465 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-26 05:19:24.659472 | orchestrator | Thursday 26 March 2026 05:19:14 +0000 (0:00:01.130) 0:16:38.051 ******** 
2026-03-26 05:19:24.659480 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659487 | orchestrator | 2026-03-26 05:19:24.659494 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-26 05:19:24.659501 | orchestrator | Thursday 26 March 2026 05:19:15 +0000 (0:00:01.163) 0:16:39.215 ******** 2026-03-26 05:19:24.659508 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-26 05:19:24.659516 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-26 05:19:24.659523 | orchestrator | 2026-03-26 05:19:24.659531 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-26 05:19:24.659543 | orchestrator | Thursday 26 March 2026 05:19:17 +0000 (0:00:01.897) 0:16:41.112 ******** 2026-03-26 05:19:24.659556 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.659569 | orchestrator | 2026-03-26 05:19:24.659582 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-26 05:19:24.659594 | orchestrator | Thursday 26 March 2026 05:19:18 +0000 (0:00:01.455) 0:16:42.568 ******** 2026-03-26 05:19:24.659606 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659618 | orchestrator | 2026-03-26 05:19:24.659632 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-26 05:19:24.659645 | orchestrator | Thursday 26 March 2026 05:19:20 +0000 (0:00:01.125) 0:16:43.694 ******** 2026-03-26 05:19:24.659659 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:19:24.659672 | orchestrator | 2026-03-26 05:19:24.659689 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 05:19:24.659697 | orchestrator | Thursday 26 March 2026 05:19:20 +0000 (0:00:00.803) 0:16:44.497 ******** 2026-03-26 05:19:24.659704 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 05:19:24.659711 | orchestrator | 2026-03-26 05:19:24.659718 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 05:19:24.659726 | orchestrator | Thursday 26 March 2026 05:19:21 +0000 (0:00:00.774) 0:16:45.272 ******** 2026-03-26 05:19:24.659733 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-26 05:19:24.659740 | orchestrator | 2026-03-26 05:19:24.659747 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-26 05:19:24.659754 | orchestrator | Thursday 26 March 2026 05:19:22 +0000 (0:00:01.137) 0:16:46.410 ******** 2026-03-26 05:19:24.659761 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:19:24.659768 | orchestrator | 2026-03-26 05:19:24.659776 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-26 05:19:24.659796 | orchestrator | Thursday 26 March 2026 05:19:24 +0000 (0:00:01.892) 0:16:48.302 ******** 2026-03-26 05:20:04.451671 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 05:20:04.451811 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 05:20:04.451836 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 05:20:04.451848 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.451861 | orchestrator | 2026-03-26 05:20:04.451873 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-26 05:20:04.451884 | orchestrator | Thursday 26 March 2026 05:19:25 +0000 (0:00:01.145) 0:16:49.448 ******** 2026-03-26 05:20:04.451895 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.451906 | orchestrator | 2026-03-26 05:20:04.451917 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-26 05:20:04.451927 | orchestrator | Thursday 26 March 2026 05:19:26 +0000 (0:00:01.136) 0:16:50.584 ******** 2026-03-26 05:20:04.451938 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.451949 | orchestrator | 2026-03-26 05:20:04.451960 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-26 05:20:04.451971 | orchestrator | Thursday 26 March 2026 05:19:28 +0000 (0:00:01.197) 0:16:51.782 ******** 2026-03-26 05:20:04.451982 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.451992 | orchestrator | 2026-03-26 05:20:04.452003 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-26 05:20:04.452014 | orchestrator | Thursday 26 March 2026 05:19:29 +0000 (0:00:01.157) 0:16:52.939 ******** 2026-03-26 05:20:04.452024 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452035 | orchestrator | 2026-03-26 05:20:04.452046 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-26 05:20:04.452057 | orchestrator | Thursday 26 March 2026 05:19:30 +0000 (0:00:01.179) 0:16:54.118 ******** 2026-03-26 05:20:04.452068 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452078 | orchestrator | 2026-03-26 05:20:04.452089 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 05:20:04.452100 | orchestrator | Thursday 26 March 2026 05:19:31 +0000 (0:00:00.818) 0:16:54.937 ******** 2026-03-26 05:20:04.452111 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:04.452123 | orchestrator | 2026-03-26 05:20:04.452134 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 05:20:04.452145 | orchestrator | Thursday 26 March 2026 05:19:33 +0000 (0:00:02.279) 0:16:57.216 ******** 2026-03-26 05:20:04.452156 | orchestrator | ok: 
[testbed-node-2] 2026-03-26 05:20:04.452166 | orchestrator | 2026-03-26 05:20:04.452177 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 05:20:04.452188 | orchestrator | Thursday 26 March 2026 05:19:34 +0000 (0:00:00.763) 0:16:57.980 ******** 2026-03-26 05:20:04.452202 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-26 05:20:04.452215 | orchestrator | 2026-03-26 05:20:04.452227 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-26 05:20:04.452239 | orchestrator | Thursday 26 March 2026 05:19:35 +0000 (0:00:01.138) 0:16:59.118 ******** 2026-03-26 05:20:04.452251 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452264 | orchestrator | 2026-03-26 05:20:04.452277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-26 05:20:04.452290 | orchestrator | Thursday 26 March 2026 05:19:36 +0000 (0:00:01.132) 0:17:00.251 ******** 2026-03-26 05:20:04.452303 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452315 | orchestrator | 2026-03-26 05:20:04.452327 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-26 05:20:04.452338 | orchestrator | Thursday 26 March 2026 05:19:37 +0000 (0:00:01.140) 0:17:01.391 ******** 2026-03-26 05:20:04.452349 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452383 | orchestrator | 2026-03-26 05:20:04.452395 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-26 05:20:04.452405 | orchestrator | Thursday 26 March 2026 05:19:38 +0000 (0:00:01.184) 0:17:02.576 ******** 2026-03-26 05:20:04.452496 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452516 | orchestrator | 2026-03-26 05:20:04.452528 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-03-26 05:20:04.452539 | orchestrator | Thursday 26 March 2026 05:19:40 +0000 (0:00:01.154) 0:17:03.730 ******** 2026-03-26 05:20:04.452549 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452560 | orchestrator | 2026-03-26 05:20:04.452570 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-26 05:20:04.452581 | orchestrator | Thursday 26 March 2026 05:19:41 +0000 (0:00:01.219) 0:17:04.950 ******** 2026-03-26 05:20:04.452592 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452602 | orchestrator | 2026-03-26 05:20:04.452613 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-26 05:20:04.452624 | orchestrator | Thursday 26 March 2026 05:19:42 +0000 (0:00:01.154) 0:17:06.105 ******** 2026-03-26 05:20:04.452649 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452660 | orchestrator | 2026-03-26 05:20:04.452671 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-26 05:20:04.452682 | orchestrator | Thursday 26 March 2026 05:19:43 +0000 (0:00:01.176) 0:17:07.281 ******** 2026-03-26 05:20:04.452692 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.452703 | orchestrator | 2026-03-26 05:20:04.452713 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-26 05:20:04.452724 | orchestrator | Thursday 26 March 2026 05:19:44 +0000 (0:00:01.206) 0:17:08.487 ******** 2026-03-26 05:20:04.452735 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:04.452745 | orchestrator | 2026-03-26 05:20:04.452757 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 05:20:04.452767 | orchestrator | Thursday 26 March 2026 05:19:45 +0000 (0:00:00.849) 0:17:09.337 ******** 2026-03-26 05:20:04.452778 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-26 05:20:04.452790 | orchestrator | 2026-03-26 05:20:04.452801 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-26 05:20:04.452829 | orchestrator | Thursday 26 March 2026 05:19:46 +0000 (0:00:01.103) 0:17:10.441 ******** 2026-03-26 05:20:04.452841 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-26 05:20:04.452852 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-26 05:20:04.452863 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-26 05:20:04.452874 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-26 05:20:04.452884 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-26 05:20:04.452895 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-26 05:20:04.452907 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-26 05:20:04.452926 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-26 05:20:04.452941 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 05:20:04.452969 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 05:20:04.452990 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 05:20:04.453007 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 05:20:04.453024 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 05:20:04.453041 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-26 05:20:04.453060 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-26 05:20:04.453077 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-26 05:20:04.453096 | orchestrator | 2026-03-26 05:20:04.453115 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 05:20:04.453149 | orchestrator | Thursday 26 March 2026 05:19:53 +0000 (0:00:06.354) 0:17:16.795 ******** 2026-03-26 05:20:04.453164 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453175 | orchestrator | 2026-03-26 05:20:04.453186 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 05:20:04.453197 | orchestrator | Thursday 26 March 2026 05:19:53 +0000 (0:00:00.763) 0:17:17.558 ******** 2026-03-26 05:20:04.453207 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453218 | orchestrator | 2026-03-26 05:20:04.453228 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-26 05:20:04.453239 | orchestrator | Thursday 26 March 2026 05:19:54 +0000 (0:00:00.774) 0:17:18.333 ******** 2026-03-26 05:20:04.453258 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453276 | orchestrator | 2026-03-26 05:20:04.453294 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-26 05:20:04.453312 | orchestrator | Thursday 26 March 2026 05:19:55 +0000 (0:00:00.800) 0:17:19.134 ******** 2026-03-26 05:20:04.453329 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453347 | orchestrator | 2026-03-26 05:20:04.453363 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-26 05:20:04.453381 | orchestrator | Thursday 26 March 2026 05:19:56 +0000 (0:00:00.817) 0:17:19.952 ******** 2026-03-26 05:20:04.453398 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453444 | orchestrator | 2026-03-26 05:20:04.453463 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-26 05:20:04.453481 | orchestrator | Thursday 26 March 2026 05:19:57 +0000 (0:00:00.834) 0:17:20.786 ******** 2026-03-26 
05:20:04.453501 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453519 | orchestrator | 2026-03-26 05:20:04.453538 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-26 05:20:04.453550 | orchestrator | Thursday 26 March 2026 05:19:58 +0000 (0:00:00.894) 0:17:21.680 ******** 2026-03-26 05:20:04.453560 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453571 | orchestrator | 2026-03-26 05:20:04.453582 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-26 05:20:04.453593 | orchestrator | Thursday 26 March 2026 05:19:58 +0000 (0:00:00.847) 0:17:22.528 ******** 2026-03-26 05:20:04.453603 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453614 | orchestrator | 2026-03-26 05:20:04.453625 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-26 05:20:04.453636 | orchestrator | Thursday 26 March 2026 05:19:59 +0000 (0:00:00.773) 0:17:23.302 ******** 2026-03-26 05:20:04.453647 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453657 | orchestrator | 2026-03-26 05:20:04.453668 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-26 05:20:04.453679 | orchestrator | Thursday 26 March 2026 05:20:00 +0000 (0:00:00.800) 0:17:24.102 ******** 2026-03-26 05:20:04.453689 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453700 | orchestrator | 2026-03-26 05:20:04.453710 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-26 05:20:04.453729 | orchestrator | Thursday 26 March 2026 05:20:01 +0000 (0:00:00.767) 0:17:24.870 ******** 2026-03-26 05:20:04.453740 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453751 | orchestrator | 2026-03-26 
05:20:04.453761 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-26 05:20:04.453772 | orchestrator | Thursday 26 March 2026 05:20:01 +0000 (0:00:00.777) 0:17:25.648 ******** 2026-03-26 05:20:04.453783 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453794 | orchestrator | 2026-03-26 05:20:04.453804 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-26 05:20:04.453815 | orchestrator | Thursday 26 March 2026 05:20:02 +0000 (0:00:00.786) 0:17:26.434 ******** 2026-03-26 05:20:04.453835 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453846 | orchestrator | 2026-03-26 05:20:04.453856 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-26 05:20:04.453867 | orchestrator | Thursday 26 March 2026 05:20:03 +0000 (0:00:00.894) 0:17:27.328 ******** 2026-03-26 05:20:04.453878 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:04.453889 | orchestrator | 2026-03-26 05:20:04.453899 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 05:20:04.453921 | orchestrator | Thursday 26 March 2026 05:20:04 +0000 (0:00:00.770) 0:17:28.099 ******** 2026-03-26 05:20:51.935939 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936058 | orchestrator | 2026-03-26 05:20:51.936075 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 05:20:51.936089 | orchestrator | Thursday 26 March 2026 05:20:05 +0000 (0:00:00.882) 0:17:28.981 ******** 2026-03-26 05:20:51.936101 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936112 | orchestrator | 2026-03-26 05:20:51.936123 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 05:20:51.936135 | orchestrator | Thursday 26 March 2026 05:20:06 +0000 (0:00:00.758) 
0:17:29.740 ******** 2026-03-26 05:20:51.936145 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936156 | orchestrator | 2026-03-26 05:20:51.936168 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:20:51.936180 | orchestrator | Thursday 26 March 2026 05:20:06 +0000 (0:00:00.784) 0:17:30.525 ******** 2026-03-26 05:20:51.936190 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936205 | orchestrator | 2026-03-26 05:20:51.936225 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:20:51.936243 | orchestrator | Thursday 26 March 2026 05:20:07 +0000 (0:00:00.743) 0:17:31.269 ******** 2026-03-26 05:20:51.936263 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936282 | orchestrator | 2026-03-26 05:20:51.936301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:20:51.936322 | orchestrator | Thursday 26 March 2026 05:20:08 +0000 (0:00:00.780) 0:17:32.049 ******** 2026-03-26 05:20:51.936342 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936363 | orchestrator | 2026-03-26 05:20:51.936382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:20:51.936402 | orchestrator | Thursday 26 March 2026 05:20:09 +0000 (0:00:00.762) 0:17:32.812 ******** 2026-03-26 05:20:51.936423 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936464 | orchestrator | 2026-03-26 05:20:51.936487 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:20:51.936508 | orchestrator | Thursday 26 March 2026 05:20:09 +0000 (0:00:00.748) 0:17:33.560 ******** 2026-03-26 05:20:51.936529 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:20:51.936550 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:20:51.936572 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:20:51.936592 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936612 | orchestrator | 2026-03-26 05:20:51.936627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:20:51.936639 | orchestrator | Thursday 26 March 2026 05:20:10 +0000 (0:00:01.070) 0:17:34.631 ******** 2026-03-26 05:20:51.936652 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:20:51.936665 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:20:51.936677 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:20:51.936689 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936701 | orchestrator | 2026-03-26 05:20:51.936713 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:20:51.936726 | orchestrator | Thursday 26 March 2026 05:20:12 +0000 (0:00:01.039) 0:17:35.670 ******** 2026-03-26 05:20:51.936763 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:20:51.936776 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:20:51.936789 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:20:51.936801 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936814 | orchestrator | 2026-03-26 05:20:51.936826 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:20:51.936837 | orchestrator | Thursday 26 March 2026 05:20:13 +0000 (0:00:01.016) 0:17:36.687 ******** 2026-03-26 05:20:51.936847 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936858 | orchestrator | 2026-03-26 05:20:51.936868 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-03-26 05:20:51.936879 | orchestrator | Thursday 26 March 2026 05:20:13 +0000 (0:00:00.772) 0:17:37.459 ******** 2026-03-26 05:20:51.936890 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-26 05:20:51.936901 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.936912 | orchestrator | 2026-03-26 05:20:51.936923 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 05:20:51.936934 | orchestrator | Thursday 26 March 2026 05:20:14 +0000 (0:00:00.929) 0:17:38.389 ******** 2026-03-26 05:20:51.936944 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:20:51.936955 | orchestrator | 2026-03-26 05:20:51.936965 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-26 05:20:51.936992 | orchestrator | Thursday 26 March 2026 05:20:16 +0000 (0:00:01.440) 0:17:39.830 ******** 2026-03-26 05:20:51.937002 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937013 | orchestrator | 2026-03-26 05:20:51.937024 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-26 05:20:51.937034 | orchestrator | Thursday 26 March 2026 05:20:17 +0000 (0:00:00.829) 0:17:40.660 ******** 2026-03-26 05:20:51.937045 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2 2026-03-26 05:20:51.937056 | orchestrator | 2026-03-26 05:20:51.937067 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-26 05:20:51.937077 | orchestrator | Thursday 26 March 2026 05:20:18 +0000 (0:00:01.164) 0:17:41.824 ******** 2026-03-26 05:20:51.937088 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937098 | orchestrator | 2026-03-26 05:20:51.937109 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-26 05:20:51.937120 | 
orchestrator | Thursday 26 March 2026 05:20:21 +0000 (0:00:03.209) 0:17:45.033 ******** 2026-03-26 05:20:51.937130 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.937141 | orchestrator | 2026-03-26 05:20:51.937151 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-26 05:20:51.937182 | orchestrator | Thursday 26 March 2026 05:20:22 +0000 (0:00:01.188) 0:17:46.222 ******** 2026-03-26 05:20:51.937194 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937205 | orchestrator | 2026-03-26 05:20:51.937215 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-26 05:20:51.937226 | orchestrator | Thursday 26 March 2026 05:20:23 +0000 (0:00:01.153) 0:17:47.375 ******** 2026-03-26 05:20:51.937236 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937247 | orchestrator | 2026-03-26 05:20:51.937257 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-26 05:20:51.937268 | orchestrator | Thursday 26 March 2026 05:20:24 +0000 (0:00:01.164) 0:17:48.540 ******** 2026-03-26 05:20:51.937279 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:20:51.937289 | orchestrator | 2026-03-26 05:20:51.937300 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-26 05:20:51.937311 | orchestrator | Thursday 26 March 2026 05:20:26 +0000 (0:00:02.091) 0:17:50.631 ******** 2026-03-26 05:20:51.937321 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937332 | orchestrator | 2026-03-26 05:20:51.937343 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-26 05:20:51.937361 | orchestrator | Thursday 26 March 2026 05:20:28 +0000 (0:00:01.605) 0:17:52.237 ******** 2026-03-26 05:20:51.937372 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937382 | orchestrator | 2026-03-26 05:20:51.937393 | 
orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-26 05:20:51.937404 | orchestrator | Thursday 26 March 2026 05:20:30 +0000 (0:00:01.522) 0:17:53.759 ******** 2026-03-26 05:20:51.937414 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937425 | orchestrator | 2026-03-26 05:20:51.937436 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-26 05:20:51.937469 | orchestrator | Thursday 26 March 2026 05:20:31 +0000 (0:00:01.533) 0:17:55.293 ******** 2026-03-26 05:20:51.937481 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:20:51.937492 | orchestrator | 2026-03-26 05:20:51.937503 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-26 05:20:51.937513 | orchestrator | Thursday 26 March 2026 05:20:33 +0000 (0:00:01.584) 0:17:56.877 ******** 2026-03-26 05:20:51.937524 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:20:51.937534 | orchestrator | 2026-03-26 05:20:51.937545 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-26 05:20:51.937556 | orchestrator | Thursday 26 March 2026 05:20:34 +0000 (0:00:01.548) 0:17:58.425 ******** 2026-03-26 05:20:51.937566 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 05:20:51.937577 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-26 05:20:51.937588 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-03-26 05:20:51.937598 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-26 05:20:51.937609 | orchestrator | 2026-03-26 05:20:51.937620 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-26 05:20:51.937630 | orchestrator | Thursday 26 March 2026 05:20:38 +0000 (0:00:04.195) 0:18:02.621 
******** 2026-03-26 05:20:51.937641 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:20:51.937651 | orchestrator | 2026-03-26 05:20:51.937662 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-26 05:20:51.937673 | orchestrator | Thursday 26 March 2026 05:20:41 +0000 (0:00:02.120) 0:18:04.742 ******** 2026-03-26 05:20:51.937683 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937694 | orchestrator | 2026-03-26 05:20:51.937705 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-26 05:20:51.937715 | orchestrator | Thursday 26 March 2026 05:20:42 +0000 (0:00:01.146) 0:18:05.888 ******** 2026-03-26 05:20:51.937726 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937736 | orchestrator | 2026-03-26 05:20:51.937747 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-26 05:20:51.937757 | orchestrator | Thursday 26 March 2026 05:20:43 +0000 (0:00:01.151) 0:18:07.039 ******** 2026-03-26 05:20:51.937768 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937779 | orchestrator | 2026-03-26 05:20:51.937789 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-26 05:20:51.937800 | orchestrator | Thursday 26 March 2026 05:20:45 +0000 (0:00:01.799) 0:18:08.838 ******** 2026-03-26 05:20:51.937810 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:20:51.937821 | orchestrator | 2026-03-26 05:20:51.937832 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-26 05:20:51.937842 | orchestrator | Thursday 26 March 2026 05:20:46 +0000 (0:00:01.465) 0:18:10.304 ******** 2026-03-26 05:20:51.937853 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.937864 | orchestrator | 2026-03-26 05:20:51.937874 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-03-26 05:20:51.937890 | orchestrator | Thursday 26 March 2026 05:20:47 +0000 (0:00:00.760) 0:18:11.065 ******** 2026-03-26 05:20:51.937901 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-03-26 05:20:51.937912 | orchestrator | 2026-03-26 05:20:51.937923 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-26 05:20:51.937940 | orchestrator | Thursday 26 March 2026 05:20:48 +0000 (0:00:01.123) 0:18:12.189 ******** 2026-03-26 05:20:51.937951 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.937962 | orchestrator | 2026-03-26 05:20:51.937972 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-26 05:20:51.937983 | orchestrator | Thursday 26 March 2026 05:20:49 +0000 (0:00:01.118) 0:18:13.308 ******** 2026-03-26 05:20:51.937993 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:20:51.938004 | orchestrator | 2026-03-26 05:20:51.938074 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-26 05:20:51.938089 | orchestrator | Thursday 26 March 2026 05:20:50 +0000 (0:00:01.176) 0:18:14.484 ******** 2026-03-26 05:20:51.938100 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-03-26 05:20:51.938111 | orchestrator | 2026-03-26 05:20:51.938121 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-26 05:20:51.938141 | orchestrator | Thursday 26 March 2026 05:20:51 +0000 (0:00:01.095) 0:18:15.579 ******** 2026-03-26 05:22:01.315005 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:22:01.315121 | orchestrator | 2026-03-26 05:22:01.315136 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-26 05:22:01.315148 | orchestrator | Thursday 26 March 2026 05:20:54 +0000 
(0:00:02.713) 0:18:18.292 ******** 2026-03-26 05:22:01.315158 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:22:01.315168 | orchestrator | 2026-03-26 05:22:01.315178 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-26 05:22:01.315188 | orchestrator | Thursday 26 March 2026 05:20:56 +0000 (0:00:02.065) 0:18:20.358 ******** 2026-03-26 05:22:01.315198 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:22:01.315207 | orchestrator | 2026-03-26 05:22:01.315217 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-26 05:22:01.315227 | orchestrator | Thursday 26 March 2026 05:20:59 +0000 (0:00:02.468) 0:18:22.827 ******** 2026-03-26 05:22:01.315237 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:22:01.315248 | orchestrator | 2026-03-26 05:22:01.315258 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-26 05:22:01.315268 | orchestrator | Thursday 26 March 2026 05:21:03 +0000 (0:00:04.019) 0:18:26.846 ******** 2026-03-26 05:22:01.315278 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-03-26 05:22:01.315289 | orchestrator | 2026-03-26 05:22:01.315299 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-26 05:22:01.315309 | orchestrator | Thursday 26 March 2026 05:21:04 +0000 (0:00:01.151) 0:18:27.998 ******** 2026-03-26 05:22:01.315318 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-26 05:22:01.315328 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:22:01.315338 | orchestrator | 2026-03-26 05:22:01.315347 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-26 05:22:01.315357 | orchestrator | Thursday 26 March 2026 05:21:27 +0000 (0:00:23.007) 0:18:51.006 ******** 2026-03-26 05:22:01.315366 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:22:01.315376 | orchestrator | 2026-03-26 05:22:01.315386 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-26 05:22:01.315395 | orchestrator | Thursday 26 March 2026 05:21:30 +0000 (0:00:02.689) 0:18:53.695 ******** 2026-03-26 05:22:01.315405 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.315415 | orchestrator | 2026-03-26 05:22:01.315424 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-26 05:22:01.315434 | orchestrator | Thursday 26 March 2026 05:21:30 +0000 (0:00:00.801) 0:18:54.496 ******** 2026-03-26 05:22:01.315446 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-26 05:22:01.315480 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-26 05:22:01.315522 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-26 05:22:01.315560 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-26 05:22:01.315581 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-26 05:22:01.315600 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__77be735a24b84ab013e2e181233ee69b9d9f8b69'}])  2026-03-26 05:22:01.315616 | orchestrator | 2026-03-26 05:22:01.315645 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-03-26 05:22:01.315657 | orchestrator | Thursday 26 March 2026 05:21:40 +0000 (0:00:09.632) 0:19:04.129 ******** 2026-03-26 05:22:01.315668 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:22:01.315680 | orchestrator | 
2026-03-26 05:22:01.315690 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:22:01.315701 | orchestrator | Thursday 26 March 2026 05:21:42 +0000 (0:00:02.183) 0:19:06.312 ******** 2026-03-26 05:22:01.315712 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:22:01.315722 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-03-26 05:22:01.315733 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-03-26 05:22:01.315744 | orchestrator | 2026-03-26 05:22:01.315755 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:22:01.315782 | orchestrator | Thursday 26 March 2026 05:21:44 +0000 (0:00:01.868) 0:19:08.181 ******** 2026-03-26 05:22:01.315793 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-26 05:22:01.315814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-26 05:22:01.315825 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-26 05:22:01.315836 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.315847 | orchestrator | 2026-03-26 05:22:01.315858 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-03-26 05:22:01.315868 | orchestrator | Thursday 26 March 2026 05:21:45 +0000 (0:00:01.420) 0:19:09.602 ******** 2026-03-26 05:22:01.315877 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.315895 | orchestrator | 2026-03-26 05:22:01.315905 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-03-26 05:22:01.315914 | orchestrator | Thursday 26 March 2026 05:21:46 +0000 (0:00:00.762) 0:19:10.364 ******** 2026-03-26 05:22:01.315924 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:22:01.315933 | orchestrator | 2026-03-26 05:22:01.315943 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-26 05:22:01.315953 | orchestrator | Thursday 26 March 2026 05:21:48 +0000 (0:00:01.963) 0:19:12.328 ******** 2026-03-26 05:22:01.315962 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.315972 | orchestrator | 2026-03-26 05:22:01.315981 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-26 05:22:01.315991 | orchestrator | Thursday 26 March 2026 05:21:49 +0000 (0:00:00.825) 0:19:13.154 ******** 2026-03-26 05:22:01.316000 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.316010 | orchestrator | 2026-03-26 05:22:01.316019 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-26 05:22:01.316029 | orchestrator | Thursday 26 March 2026 05:21:50 +0000 (0:00:00.762) 0:19:13.916 ******** 2026-03-26 05:22:01.316038 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.316048 | orchestrator | 2026-03-26 05:22:01.316057 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-26 05:22:01.316067 | orchestrator | Thursday 26 March 2026 05:21:51 +0000 (0:00:00.773) 0:19:14.690 ******** 2026-03-26 05:22:01.316077 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.316087 | orchestrator | 2026-03-26 05:22:01.316098 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-26 05:22:01.316109 | orchestrator | Thursday 26 March 2026 05:21:51 +0000 (0:00:00.754) 0:19:15.444 ******** 2026-03-26 05:22:01.316119 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.316130 | 
orchestrator | 2026-03-26 05:22:01.316141 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-26 05:22:01.316151 | orchestrator | Thursday 26 March 2026 05:21:52 +0000 (0:00:00.764) 0:19:16.209 ******** 2026-03-26 05:22:01.316162 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.316172 | orchestrator | 2026-03-26 05:22:01.316183 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-26 05:22:01.316194 | orchestrator | Thursday 26 March 2026 05:21:53 +0000 (0:00:00.845) 0:19:17.055 ******** 2026-03-26 05:22:01.316204 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:22:01.316215 | orchestrator | 2026-03-26 05:22:01.316225 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-03-26 05:22:01.316236 | orchestrator | 2026-03-26 05:22:01.316246 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-03-26 05:22:01.316257 | orchestrator | Thursday 26 March 2026 05:21:55 +0000 (0:00:01.781) 0:19:18.836 ******** 2026-03-26 05:22:01.316268 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:22:01.316279 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:22:01.316289 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:22:01.316300 | orchestrator | 2026-03-26 05:22:01.316316 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-03-26 05:22:01.316328 | orchestrator | 2026-03-26 05:22:01.316338 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-26 05:22:01.316349 | orchestrator | Thursday 26 March 2026 05:21:56 +0000 (0:00:01.460) 0:19:20.297 ******** 2026-03-26 05:22:01.316360 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:01.316370 | orchestrator | 2026-03-26 05:22:01.316381 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-03-26 05:22:01.316392 | orchestrator | Thursday 26 March 2026 05:21:57 +0000 (0:00:01.175) 0:19:21.473 ******** 2026-03-26 05:22:01.316402 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:01.316413 | orchestrator | 2026-03-26 05:22:01.316423 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-26 05:22:01.316434 | orchestrator | Thursday 26 March 2026 05:21:58 +0000 (0:00:01.170) 0:19:22.644 ******** 2026-03-26 05:22:01.316451 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:01.316462 | orchestrator | 2026-03-26 05:22:01.316472 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 05:22:01.316483 | orchestrator | Thursday 26 March 2026 05:22:00 +0000 (0:00:01.141) 0:19:23.785 ******** 2026-03-26 05:22:01.316521 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:01.316533 | orchestrator | 2026-03-26 05:22:01.316550 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 05:22:46.874910 | orchestrator | Thursday 26 March 2026 05:22:01 +0000 (0:00:01.174) 0:19:24.959 ******** 2026-03-26 05:22:46.875028 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875045 | orchestrator | 2026-03-26 05:22:46.875059 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 05:22:46.875070 | orchestrator | Thursday 26 March 2026 05:22:02 +0000 (0:00:01.123) 0:19:26.083 ******** 2026-03-26 05:22:46.875081 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875092 | orchestrator | 2026-03-26 05:22:46.875103 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 05:22:46.875114 | orchestrator | Thursday 26 March 2026 05:22:03 +0000 (0:00:01.125) 0:19:27.208 ******** 2026-03-26 05:22:46.875125 | orchestrator | skipping: 
[testbed-node-0] 2026-03-26 05:22:46.875136 | orchestrator | 2026-03-26 05:22:46.875146 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 05:22:46.875157 | orchestrator | Thursday 26 March 2026 05:22:04 +0000 (0:00:01.125) 0:19:28.334 ******** 2026-03-26 05:22:46.875168 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875179 | orchestrator | 2026-03-26 05:22:46.875189 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 05:22:46.875200 | orchestrator | Thursday 26 March 2026 05:22:05 +0000 (0:00:01.134) 0:19:29.469 ******** 2026-03-26 05:22:46.875210 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875221 | orchestrator | 2026-03-26 05:22:46.875238 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 05:22:46.875258 | orchestrator | Thursday 26 March 2026 05:22:06 +0000 (0:00:01.108) 0:19:30.578 ******** 2026-03-26 05:22:46.875278 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875297 | orchestrator | 2026-03-26 05:22:46.875316 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 05:22:46.875335 | orchestrator | Thursday 26 March 2026 05:22:08 +0000 (0:00:01.120) 0:19:31.698 ******** 2026-03-26 05:22:46.875354 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875372 | orchestrator | 2026-03-26 05:22:46.875391 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 05:22:46.875408 | orchestrator | Thursday 26 March 2026 05:22:09 +0000 (0:00:01.107) 0:19:32.806 ******** 2026-03-26 05:22:46.875425 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875443 | orchestrator | 2026-03-26 05:22:46.875464 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-26 05:22:46.875484 | 
orchestrator | Thursday 26 March 2026 05:22:10 +0000 (0:00:01.133) 0:19:33.939 ******** 2026-03-26 05:22:46.875504 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875554 | orchestrator | 2026-03-26 05:22:46.875574 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 05:22:46.875593 | orchestrator | Thursday 26 March 2026 05:22:11 +0000 (0:00:01.194) 0:19:35.133 ******** 2026-03-26 05:22:46.875611 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875629 | orchestrator | 2026-03-26 05:22:46.875648 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 05:22:46.875668 | orchestrator | Thursday 26 March 2026 05:22:12 +0000 (0:00:01.154) 0:19:36.288 ******** 2026-03-26 05:22:46.875687 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875706 | orchestrator | 2026-03-26 05:22:46.875725 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 05:22:46.875746 | orchestrator | Thursday 26 March 2026 05:22:13 +0000 (0:00:01.114) 0:19:37.402 ******** 2026-03-26 05:22:46.875796 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875815 | orchestrator | 2026-03-26 05:22:46.875833 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 05:22:46.875852 | orchestrator | Thursday 26 March 2026 05:22:14 +0000 (0:00:01.123) 0:19:38.526 ******** 2026-03-26 05:22:46.875870 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875888 | orchestrator | 2026-03-26 05:22:46.875906 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 05:22:46.875924 | orchestrator | Thursday 26 March 2026 05:22:15 +0000 (0:00:01.121) 0:19:39.648 ******** 2026-03-26 05:22:46.875942 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.875960 | orchestrator | 2026-03-26 
05:22:46.875978 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-26 05:22:46.875994 | orchestrator | Thursday 26 March 2026 05:22:17 +0000 (0:00:01.103) 0:19:40.751 ******** 2026-03-26 05:22:46.876010 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876027 | orchestrator | 2026-03-26 05:22:46.876044 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-26 05:22:46.876063 | orchestrator | Thursday 26 March 2026 05:22:18 +0000 (0:00:01.113) 0:19:41.865 ******** 2026-03-26 05:22:46.876080 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876098 | orchestrator | 2026-03-26 05:22:46.876135 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 05:22:46.876154 | orchestrator | Thursday 26 March 2026 05:22:19 +0000 (0:00:01.172) 0:19:43.038 ******** 2026-03-26 05:22:46.876171 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876187 | orchestrator | 2026-03-26 05:22:46.876205 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 05:22:46.876224 | orchestrator | Thursday 26 March 2026 05:22:20 +0000 (0:00:01.113) 0:19:44.151 ******** 2026-03-26 05:22:46.876242 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876259 | orchestrator | 2026-03-26 05:22:46.876278 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-26 05:22:46.876297 | orchestrator | Thursday 26 March 2026 05:22:21 +0000 (0:00:01.187) 0:19:45.339 ******** 2026-03-26 05:22:46.876315 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876333 | orchestrator | 2026-03-26 05:22:46.876351 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-26 05:22:46.876369 | orchestrator | Thursday 26 March 2026 05:22:22 +0000 
(0:00:01.154) 0:19:46.494 ******** 2026-03-26 05:22:46.876387 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876405 | orchestrator | 2026-03-26 05:22:46.876424 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-26 05:22:46.876468 | orchestrator | Thursday 26 March 2026 05:22:23 +0000 (0:00:01.131) 0:19:47.625 ******** 2026-03-26 05:22:46.876481 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876491 | orchestrator | 2026-03-26 05:22:46.876502 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 05:22:46.876513 | orchestrator | Thursday 26 March 2026 05:22:25 +0000 (0:00:01.147) 0:19:48.773 ******** 2026-03-26 05:22:46.876553 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876566 | orchestrator | 2026-03-26 05:22:46.876576 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 05:22:46.876587 | orchestrator | Thursday 26 March 2026 05:22:26 +0000 (0:00:01.146) 0:19:49.919 ******** 2026-03-26 05:22:46.876598 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876608 | orchestrator | 2026-03-26 05:22:46.876619 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 05:22:46.876630 | orchestrator | Thursday 26 March 2026 05:22:27 +0000 (0:00:01.144) 0:19:51.064 ******** 2026-03-26 05:22:46.876640 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876651 | orchestrator | 2026-03-26 05:22:46.876662 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 05:22:46.876686 | orchestrator | Thursday 26 March 2026 05:22:28 +0000 (0:00:01.208) 0:19:52.273 ******** 2026-03-26 05:22:46.876697 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876716 | orchestrator | 2026-03-26 05:22:46.876733 | orchestrator | TASK 
[ceph-container-common : Get ceph version] ******************************** 2026-03-26 05:22:46.876752 | orchestrator | Thursday 26 March 2026 05:22:29 +0000 (0:00:01.150) 0:19:53.424 ******** 2026-03-26 05:22:46.876769 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876787 | orchestrator | 2026-03-26 05:22:46.876805 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 05:22:46.876824 | orchestrator | Thursday 26 March 2026 05:22:30 +0000 (0:00:01.177) 0:19:54.602 ******** 2026-03-26 05:22:46.876844 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876862 | orchestrator | 2026-03-26 05:22:46.876879 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 05:22:46.876890 | orchestrator | Thursday 26 March 2026 05:22:32 +0000 (0:00:01.125) 0:19:55.728 ******** 2026-03-26 05:22:46.876900 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876911 | orchestrator | 2026-03-26 05:22:46.876921 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 05:22:46.876932 | orchestrator | Thursday 26 March 2026 05:22:33 +0000 (0:00:01.243) 0:19:56.972 ******** 2026-03-26 05:22:46.876942 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876953 | orchestrator | 2026-03-26 05:22:46.876964 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 05:22:46.876974 | orchestrator | Thursday 26 March 2026 05:22:34 +0000 (0:00:01.092) 0:19:58.064 ******** 2026-03-26 05:22:46.876985 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:22:46.876996 | orchestrator | 2026-03-26 05:22:46.877006 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 05:22:46.877017 | orchestrator | Thursday 26 March 2026 05:22:35 +0000 (0:00:01.138) 0:19:59.203 ******** 2026-03-26 
05:22:46.877027 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877038 | orchestrator |
2026-03-26 05:22:46.877048 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 05:22:46.877059 | orchestrator | Thursday 26 March 2026 05:22:36 +0000 (0:00:01.115) 0:20:00.318 ********
2026-03-26 05:22:46.877069 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877080 | orchestrator |
2026-03-26 05:22:46.877090 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 05:22:46.877101 | orchestrator | Thursday 26 March 2026 05:22:37 +0000 (0:00:01.113) 0:20:01.432 ********
2026-03-26 05:22:46.877111 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877122 | orchestrator |
2026-03-26 05:22:46.877132 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 05:22:46.877143 | orchestrator | Thursday 26 March 2026 05:22:38 +0000 (0:00:01.132) 0:20:02.564 ********
2026-03-26 05:22:46.877153 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877164 | orchestrator |
2026-03-26 05:22:46.877174 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 05:22:46.877185 | orchestrator | Thursday 26 March 2026 05:22:40 +0000 (0:00:01.128) 0:20:03.692 ********
2026-03-26 05:22:46.877195 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877206 | orchestrator |
2026-03-26 05:22:46.877216 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 05:22:46.877228 | orchestrator | Thursday 26 March 2026 05:22:41 +0000 (0:00:01.147) 0:20:04.840 ********
2026-03-26 05:22:46.877239 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877249 | orchestrator |
2026-03-26 05:22:46.877268 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 05:22:46.877279 | orchestrator | Thursday 26 March 2026 05:22:42 +0000 (0:00:01.204) 0:20:06.045 ********
2026-03-26 05:22:46.877290 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877301 | orchestrator |
2026-03-26 05:22:46.877312 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 05:22:46.877331 | orchestrator | Thursday 26 March 2026 05:22:43 +0000 (0:00:01.141) 0:20:07.187 ********
2026-03-26 05:22:46.877342 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877353 | orchestrator |
2026-03-26 05:22:46.877364 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 05:22:46.877374 | orchestrator | Thursday 26 March 2026 05:22:44 +0000 (0:00:01.124) 0:20:08.312 ********
2026-03-26 05:22:46.877385 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877395 | orchestrator |
2026-03-26 05:22:46.877406 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 05:22:46.877416 | orchestrator | Thursday 26 March 2026 05:22:45 +0000 (0:00:01.109) 0:20:09.422 ********
2026-03-26 05:22:46.877427 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:22:46.877437 | orchestrator |
2026-03-26 05:22:46.877448 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 05:22:46.877468 | orchestrator | Thursday 26 March 2026 05:22:46 +0000 (0:00:01.096) 0:20:10.518 ********
2026-03-26 05:23:24.951300 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951418 | orchestrator |
2026-03-26 05:23:24.951435 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 05:23:24.951448 | orchestrator | Thursday 26 March 2026 05:22:48 +0000 (0:00:01.165) 0:20:11.684 ********
2026-03-26 05:23:24.951459 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951470 | orchestrator |
2026-03-26 05:23:24.951482 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 05:23:24.951492 | orchestrator | Thursday 26 March 2026 05:22:49 +0000 (0:00:01.245) 0:20:12.929 ********
2026-03-26 05:23:24.951503 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951514 | orchestrator |
2026-03-26 05:23:24.951525 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 05:23:24.951536 | orchestrator | Thursday 26 March 2026 05:22:50 +0000 (0:00:01.181) 0:20:14.110 ********
2026-03-26 05:23:24.951576 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951589 | orchestrator |
2026-03-26 05:23:24.951600 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 05:23:24.951611 | orchestrator | Thursday 26 March 2026 05:22:51 +0000 (0:00:01.283) 0:20:15.393 ********
2026-03-26 05:23:24.951622 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951633 | orchestrator |
2026-03-26 05:23:24.951644 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 05:23:24.951655 | orchestrator | Thursday 26 March 2026 05:22:52 +0000 (0:00:01.180) 0:20:16.574 ********
2026-03-26 05:23:24.951666 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951677 | orchestrator |
2026-03-26 05:23:24.951689 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:23:24.951701 | orchestrator | Thursday 26 March 2026 05:22:54 +0000 (0:00:01.124) 0:20:17.698 ********
2026-03-26 05:23:24.951712 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951723 | orchestrator |
2026-03-26 05:23:24.951734 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:23:24.951745 | orchestrator | Thursday 26 March 2026 05:22:55 +0000 (0:00:01.155) 0:20:18.853 ********
2026-03-26 05:23:24.951755 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951766 | orchestrator |
2026-03-26 05:23:24.951777 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:23:24.951788 | orchestrator | Thursday 26 March 2026 05:22:56 +0000 (0:00:01.160) 0:20:20.014 ********
2026-03-26 05:23:24.951798 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951809 | orchestrator |
2026-03-26 05:23:24.951820 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:23:24.951831 | orchestrator | Thursday 26 March 2026 05:22:57 +0000 (0:00:01.128) 0:20:21.143 ********
2026-03-26 05:23:24.951867 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951881 | orchestrator |
2026-03-26 05:23:24.951893 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:23:24.951906 | orchestrator | Thursday 26 March 2026 05:22:58 +0000 (0:00:01.155) 0:20:22.298 ********
2026-03-26 05:23:24.951918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:23:24.951931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:23:24.951943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:23:24.951955 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.951968 | orchestrator |
2026-03-26 05:23:24.951980 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:23:24.951992 | orchestrator | Thursday 26 March 2026 05:23:00 +0000 (0:00:01.393) 0:20:23.692 ********
2026-03-26 05:23:24.952004 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:23:24.952017 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:23:24.952029 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:23:24.952041 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952054 | orchestrator |
2026-03-26 05:23:24.952066 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:23:24.952079 | orchestrator | Thursday 26 March 2026 05:23:01 +0000 (0:00:01.760) 0:20:25.452 ********
2026-03-26 05:23:24.952091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:23:24.952103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:23:24.952115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:23:24.952127 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952139 | orchestrator |
2026-03-26 05:23:24.952166 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:23:24.952179 | orchestrator | Thursday 26 March 2026 05:23:03 +0000 (0:00:01.710) 0:20:27.162 ********
2026-03-26 05:23:24.952191 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952203 | orchestrator |
2026-03-26 05:23:24.952214 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:23:24.952225 | orchestrator | Thursday 26 March 2026 05:23:04 +0000 (0:00:01.196) 0:20:28.359 ********
2026-03-26 05:23:24.952236 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-26 05:23:24.952247 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952258 | orchestrator |
2026-03-26 05:23:24.952269 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 05:23:24.952280 | orchestrator | Thursday 26 March 2026 05:23:05 +0000 (0:00:01.236) 0:20:29.596 ********
2026-03-26 05:23:24.952291 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952302 | orchestrator |
2026-03-26 05:23:24.952313 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-26 05:23:24.952324 | orchestrator | Thursday 26 March 2026 05:23:07 +0000 (0:00:01.150) 0:20:30.746 ********
2026-03-26 05:23:24.952334 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:23:24.952345 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 05:23:24.952356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 05:23:24.952383 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952394 | orchestrator |
2026-03-26 05:23:24.952405 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-26 05:23:24.952416 | orchestrator | Thursday 26 March 2026 05:23:08 +0000 (0:00:01.401) 0:20:32.148 ********
2026-03-26 05:23:24.952427 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952438 | orchestrator |
2026-03-26 05:23:24.952449 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-26 05:23:24.952459 | orchestrator | Thursday 26 March 2026 05:23:09 +0000 (0:00:01.170) 0:20:33.318 ********
2026-03-26 05:23:24.952470 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952489 | orchestrator |
2026-03-26 05:23:24.952500 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-26 05:23:24.952511 | orchestrator | Thursday 26 March 2026 05:23:10 +0000 (0:00:01.141) 0:20:34.460 ********
2026-03-26 05:23:24.952521 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952532 | orchestrator |
2026-03-26 05:23:24.952544 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-26 05:23:24.952601 | orchestrator | Thursday 26 March 2026 05:23:11 +0000 (0:00:01.169) 0:20:35.630 ********
2026-03-26 05:23:24.952619 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:23:24.952636 | orchestrator |
2026-03-26 05:23:24.952653 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-26 05:23:24.952671 | orchestrator |
2026-03-26 05:23:24.952688 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-26 05:23:24.952707 | orchestrator | Thursday 26 March 2026 05:23:12 +0000 (0:00:00.967) 0:20:36.598 ********
2026-03-26 05:23:24.952725 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.952745 | orchestrator |
2026-03-26 05:23:24.952763 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-26 05:23:24.952783 | orchestrator | Thursday 26 March 2026 05:23:13 +0000 (0:00:00.781) 0:20:37.380 ********
2026-03-26 05:23:24.952801 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.952819 | orchestrator |
2026-03-26 05:23:24.952837 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 05:23:24.952856 | orchestrator | Thursday 26 March 2026 05:23:14 +0000 (0:00:00.869) 0:20:38.249 ********
2026-03-26 05:23:24.952875 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.952895 | orchestrator |
2026-03-26 05:23:24.952913 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:23:24.952932 | orchestrator | Thursday 26 March 2026 05:23:15 +0000 (0:00:00.820) 0:20:39.070 ********
2026-03-26 05:23:24.952950 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.952968 | orchestrator |
2026-03-26 05:23:24.952986 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:23:24.953005 | orchestrator | Thursday 26 March 2026 05:23:16 +0000 (0:00:00.771) 0:20:39.842 ********
2026-03-26 05:23:24.953023 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953043 | orchestrator |
2026-03-26 05:23:24.953061 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:23:24.953079 | orchestrator | Thursday 26 March 2026 05:23:16 +0000 (0:00:00.811) 0:20:40.653 ********
2026-03-26 05:23:24.953098 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953116 | orchestrator |
2026-03-26 05:23:24.953135 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:23:24.953153 | orchestrator | Thursday 26 March 2026 05:23:17 +0000 (0:00:00.792) 0:20:41.446 ********
2026-03-26 05:23:24.953172 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953190 | orchestrator |
2026-03-26 05:23:24.953210 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:23:24.953228 | orchestrator | Thursday 26 March 2026 05:23:18 +0000 (0:00:00.772) 0:20:42.218 ********
2026-03-26 05:23:24.953247 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953266 | orchestrator |
2026-03-26 05:23:24.953284 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 05:23:24.953302 | orchestrator | Thursday 26 March 2026 05:23:19 +0000 (0:00:00.782) 0:20:43.001 ********
2026-03-26 05:23:24.953320 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953338 | orchestrator |
2026-03-26 05:23:24.953356 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 05:23:24.953374 | orchestrator | Thursday 26 March 2026 05:23:20 +0000 (0:00:00.798) 0:20:43.799 ********
2026-03-26 05:23:24.953392 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953409 | orchestrator |
2026-03-26 05:23:24.953428 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 05:23:24.953460 | orchestrator | Thursday 26 March 2026 05:23:20 +0000 (0:00:00.797) 0:20:44.597 ********
2026-03-26 05:23:24.953487 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953525 | orchestrator |
2026-03-26 05:23:24.953590 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 05:23:24.953612 | orchestrator | Thursday 26 March 2026 05:23:21 +0000 (0:00:00.806) 0:20:45.403 ********
2026-03-26 05:23:24.953631 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953648 | orchestrator |
2026-03-26 05:23:24.953666 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 05:23:24.953685 | orchestrator | Thursday 26 March 2026 05:23:22 +0000 (0:00:00.787) 0:20:46.191 ********
2026-03-26 05:23:24.953705 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953722 | orchestrator |
2026-03-26 05:23:24.953740 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 05:23:24.953758 | orchestrator | Thursday 26 March 2026 05:23:23 +0000 (0:00:00.808) 0:20:47.000 ********
2026-03-26 05:23:24.953776 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953794 | orchestrator |
2026-03-26 05:23:24.953813 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 05:23:24.953831 | orchestrator | Thursday 26 March 2026 05:23:24 +0000 (0:00:00.810) 0:20:47.810 ********
2026-03-26 05:23:24.953851 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:24.953868 | orchestrator |
2026-03-26 05:23:24.953888 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 05:23:24.953911 | orchestrator | Thursday 26 March 2026 05:23:24 +0000 (0:00:00.780) 0:20:48.591 ********
2026-03-26 05:23:57.120457 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120613 | orchestrator |
2026-03-26 05:23:57.120631 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 05:23:57.120642 | orchestrator | Thursday 26 March 2026 05:23:25 +0000 (0:00:00.798) 0:20:49.389 ********
2026-03-26 05:23:57.120652 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120662 | orchestrator |
2026-03-26 05:23:57.120672 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 05:23:57.120682 | orchestrator | Thursday 26 March 2026 05:23:26 +0000 (0:00:00.784) 0:20:50.174 ********
2026-03-26 05:23:57.120691 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120701 | orchestrator |
2026-03-26 05:23:57.120710 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 05:23:57.120719 | orchestrator | Thursday 26 March 2026 05:23:27 +0000 (0:00:00.815) 0:20:50.989 ********
2026-03-26 05:23:57.120729 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120738 | orchestrator |
2026-03-26 05:23:57.120748 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 05:23:57.120758 | orchestrator | Thursday 26 March 2026 05:23:28 +0000 (0:00:00.775) 0:20:51.765 ********
2026-03-26 05:23:57.120767 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120777 | orchestrator |
2026-03-26 05:23:57.120786 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 05:23:57.120796 | orchestrator | Thursday 26 March 2026 05:23:28 +0000 (0:00:00.785) 0:20:52.550 ********
2026-03-26 05:23:57.120805 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120814 | orchestrator |
2026-03-26 05:23:57.120824 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 05:23:57.120833 | orchestrator | Thursday 26 March 2026 05:23:29 +0000 (0:00:00.793) 0:20:53.343 ********
2026-03-26 05:23:57.120843 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120852 | orchestrator |
2026-03-26 05:23:57.120862 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 05:23:57.120871 | orchestrator | Thursday 26 March 2026 05:23:30 +0000 (0:00:00.771) 0:20:54.115 ********
2026-03-26 05:23:57.120880 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120890 | orchestrator |
2026-03-26 05:23:57.120899 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 05:23:57.120931 | orchestrator | Thursday 26 March 2026 05:23:31 +0000 (0:00:00.776) 0:20:54.892 ********
2026-03-26 05:23:57.120941 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120951 | orchestrator |
2026-03-26 05:23:57.120960 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 05:23:57.120969 | orchestrator | Thursday 26 March 2026 05:23:32 +0000 (0:00:00.767) 0:20:55.659 ********
2026-03-26 05:23:57.120979 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.120989 | orchestrator |
2026-03-26 05:23:57.121000 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 05:23:57.121010 | orchestrator | Thursday 26 March 2026 05:23:32 +0000 (0:00:00.799) 0:20:56.459 ********
2026-03-26 05:23:57.121021 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121033 | orchestrator |
2026-03-26 05:23:57.121051 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 05:23:57.121068 | orchestrator | Thursday 26 March 2026 05:23:33 +0000 (0:00:00.835) 0:20:57.295 ********
2026-03-26 05:23:57.121086 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121103 | orchestrator |
2026-03-26 05:23:57.121120 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 05:23:57.121137 | orchestrator | Thursday 26 March 2026 05:23:34 +0000 (0:00:00.813) 0:20:58.108 ********
2026-03-26 05:23:57.121156 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121173 | orchestrator |
2026-03-26 05:23:57.121192 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 05:23:57.121210 | orchestrator | Thursday 26 March 2026 05:23:35 +0000 (0:00:00.786) 0:20:58.895 ********
2026-03-26 05:23:57.121230 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121248 | orchestrator |
2026-03-26 05:23:57.121266 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 05:23:57.121278 | orchestrator | Thursday 26 March 2026 05:23:36 +0000 (0:00:00.797) 0:20:59.693 ********
2026-03-26 05:23:57.121290 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121300 | orchestrator |
2026-03-26 05:23:57.121310 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 05:23:57.121319 | orchestrator | Thursday 26 March 2026 05:23:36 +0000 (0:00:00.758) 0:21:00.451 ********
2026-03-26 05:23:57.121328 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121338 | orchestrator |
2026-03-26 05:23:57.121362 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 05:23:57.121372 | orchestrator | Thursday 26 March 2026 05:23:37 +0000 (0:00:00.782) 0:21:01.234 ********
2026-03-26 05:23:57.121381 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121391 | orchestrator |
2026-03-26 05:23:57.121400 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 05:23:57.121409 | orchestrator | Thursday 26 March 2026 05:23:38 +0000 (0:00:00.824) 0:21:02.058 ********
2026-03-26 05:23:57.121419 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121428 | orchestrator |
2026-03-26 05:23:57.121438 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 05:23:57.121447 | orchestrator | Thursday 26 March 2026 05:23:39 +0000 (0:00:00.784) 0:21:02.843 ********
2026-03-26 05:23:57.121456 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121465 | orchestrator |
2026-03-26 05:23:57.121475 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 05:23:57.121484 | orchestrator | Thursday 26 March 2026 05:23:39 +0000 (0:00:00.767) 0:21:03.610 ********
2026-03-26 05:23:57.121493 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121503 | orchestrator |
2026-03-26 05:23:57.121513 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 05:23:57.121522 | orchestrator | Thursday 26 March 2026 05:23:40 +0000 (0:00:00.867) 0:21:04.478 ********
2026-03-26 05:23:57.121532 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121541 | orchestrator |
2026-03-26 05:23:57.121602 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 05:23:57.121626 | orchestrator | Thursday 26 March 2026 05:23:41 +0000 (0:00:00.785) 0:21:05.263 ********
2026-03-26 05:23:57.121636 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121645 | orchestrator |
2026-03-26 05:23:57.121654 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 05:23:57.121664 | orchestrator | Thursday 26 March 2026 05:23:42 +0000 (0:00:00.801) 0:21:06.065 ********
2026-03-26 05:23:57.121673 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121683 | orchestrator |
2026-03-26 05:23:57.121692 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 05:23:57.121702 | orchestrator | Thursday 26 March 2026 05:23:43 +0000 (0:00:00.780) 0:21:06.845 ********
2026-03-26 05:23:57.121711 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121720 | orchestrator |
2026-03-26 05:23:57.121730 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 05:23:57.121740 | orchestrator | Thursday 26 March 2026 05:23:43 +0000 (0:00:00.780) 0:21:07.625 ********
2026-03-26 05:23:57.121750 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121759 | orchestrator |
2026-03-26 05:23:57.121769 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 05:23:57.121778 | orchestrator | Thursday 26 March 2026 05:23:44 +0000 (0:00:00.836) 0:21:08.462 ********
2026-03-26 05:23:57.121788 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121797 | orchestrator |
2026-03-26 05:23:57.121807 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 05:23:57.121816 | orchestrator | Thursday 26 March 2026 05:23:45 +0000 (0:00:00.786) 0:21:09.249 ********
2026-03-26 05:23:57.121826 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121835 | orchestrator |
2026-03-26 05:23:57.121845 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 05:23:57.121854 | orchestrator | Thursday 26 March 2026 05:23:46 +0000 (0:00:00.763) 0:21:10.012 ********
2026-03-26 05:23:57.121863 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121873 | orchestrator |
2026-03-26 05:23:57.121882 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 05:23:57.121892 | orchestrator | Thursday 26 March 2026 05:23:47 +0000 (0:00:00.797) 0:21:10.810 ********
2026-03-26 05:23:57.121901 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121911 | orchestrator |
2026-03-26 05:23:57.121920 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 05:23:57.121929 | orchestrator | Thursday 26 March 2026 05:23:47 +0000 (0:00:00.761) 0:21:11.572 ********
2026-03-26 05:23:57.121939 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121948 | orchestrator |
2026-03-26 05:23:57.121958 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 05:23:57.121967 | orchestrator | Thursday 26 March 2026 05:23:48 +0000 (0:00:00.781) 0:21:12.353 ********
2026-03-26 05:23:57.121976 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.121986 | orchestrator |
2026-03-26 05:23:57.121995 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 05:23:57.122005 | orchestrator | Thursday 26 March 2026 05:23:49 +0000 (0:00:00.898) 0:21:13.252 ********
2026-03-26 05:23:57.122069 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122081 | orchestrator |
2026-03-26 05:23:57.122090 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 05:23:57.122100 | orchestrator | Thursday 26 March 2026 05:23:50 +0000 (0:00:00.781) 0:21:14.033 ********
2026-03-26 05:23:57.122109 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122119 | orchestrator |
2026-03-26 05:23:57.122128 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 05:23:57.122147 | orchestrator | Thursday 26 March 2026 05:23:51 +0000 (0:00:00.885) 0:21:14.919 ********
2026-03-26 05:23:57.122157 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122174 | orchestrator |
2026-03-26 05:23:57.122184 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 05:23:57.122193 | orchestrator | Thursday 26 March 2026 05:23:52 +0000 (0:00:00.816) 0:21:15.735 ********
2026-03-26 05:23:57.122203 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122212 | orchestrator |
2026-03-26 05:23:57.122222 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:23:57.122233 | orchestrator | Thursday 26 March 2026 05:23:52 +0000 (0:00:00.775) 0:21:16.511 ********
2026-03-26 05:23:57.122248 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122258 | orchestrator |
2026-03-26 05:23:57.122267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:23:57.122277 | orchestrator | Thursday 26 March 2026 05:23:53 +0000 (0:00:00.787) 0:21:17.298 ********
2026-03-26 05:23:57.122286 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122295 | orchestrator |
2026-03-26 05:23:57.122305 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:23:57.122314 | orchestrator | Thursday 26 March 2026 05:23:54 +0000 (0:00:00.806) 0:21:18.104 ********
2026-03-26 05:23:57.122324 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122333 | orchestrator |
2026-03-26 05:23:57.122342 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:23:57.122352 | orchestrator | Thursday 26 March 2026 05:23:55 +0000 (0:00:00.815) 0:21:18.920 ********
2026-03-26 05:23:57.122361 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:23:57.122370 | orchestrator |
2026-03-26 05:23:57.122380 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:23:57.122389 | orchestrator | Thursday 26 March 2026 05:23:56 +0000 (0:00:00.816) 0:21:19.737 ********
2026-03-26 05:23:57.122398 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-26 05:23:57.122408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-26 05:23:57.122425 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-26 05:24:28.141302 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141448 | orchestrator |
2026-03-26 05:24:28.141465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:24:28.141479 | orchestrator | Thursday 26 March 2026 05:23:57 +0000 (0:00:01.025) 0:21:20.762 ********
2026-03-26 05:24:28.141491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-26 05:24:28.141503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-26 05:24:28.141514 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-26 05:24:28.141525 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141536 | orchestrator |
2026-03-26 05:24:28.141547 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:24:28.141558 | orchestrator | Thursday 26 March 2026 05:23:58 +0000 (0:00:01.084) 0:21:21.846 ********
2026-03-26 05:24:28.141569 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-26 05:24:28.141580 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-26 05:24:28.141644 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-26 05:24:28.141655 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141666 | orchestrator |
2026-03-26 05:24:28.141677 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:24:28.141688 | orchestrator | Thursday 26 March 2026 05:23:59 +0000 (0:00:01.080) 0:21:22.927 ********
2026-03-26 05:24:28.141699 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141710 | orchestrator |
2026-03-26 05:24:28.141721 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:24:28.141732 | orchestrator | Thursday 26 March 2026 05:24:00 +0000 (0:00:00.766) 0:21:23.693 ********
2026-03-26 05:24:28.141744 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-26 05:24:28.141782 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141793 | orchestrator |
2026-03-26 05:24:28.141804 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 05:24:28.141817 | orchestrator | Thursday 26 March 2026 05:24:00 +0000 (0:00:00.911) 0:21:24.605 ********
2026-03-26 05:24:28.141830 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141843 | orchestrator |
2026-03-26 05:24:28.141855 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-26 05:24:28.141867 | orchestrator | Thursday 26 March 2026 05:24:01 +0000 (0:00:00.804) 0:21:25.410 ********
2026-03-26 05:24:28.141879 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-26 05:24:28.141892 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:24:28.141904 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-26 05:24:28.141916 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141929 | orchestrator |
2026-03-26 05:24:28.141941 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-26 05:24:28.141953 | orchestrator | Thursday 26 March 2026 05:24:03 +0000 (0:00:01.438) 0:21:26.848 ********
2026-03-26 05:24:28.141964 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.141974 | orchestrator |
2026-03-26 05:24:28.141985 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-26 05:24:28.141996 | orchestrator | Thursday 26 March 2026 05:24:03 +0000 (0:00:00.788) 0:21:27.637 ********
2026-03-26 05:24:28.142006 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.142082 | orchestrator |
2026-03-26 05:24:28.142094 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-26 05:24:28.142105 | orchestrator | Thursday 26 March 2026 05:24:04 +0000 (0:00:00.815) 0:21:28.452 ********
2026-03-26 05:24:28.142116 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.142126 | orchestrator |
2026-03-26 05:24:28.142137 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-26 05:24:28.142147 | orchestrator | Thursday 26 March 2026 05:24:05 +0000 (0:00:00.790) 0:21:29.243 ********
2026-03-26 05:24:28.142158 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:24:28.142168 | orchestrator |
2026-03-26 05:24:28.142179 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-03-26 05:24:28.142190 | orchestrator |
2026-03-26 05:24:28.142200 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-03-26 05:24:28.142211 | orchestrator | Thursday 26 March 2026 05:24:06 +0000 (0:00:01.318) 0:21:30.561 ********
2026-03-26 05:24:28.142222 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:24:28.142233 | orchestrator |
2026-03-26 05:24:28.142243 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-26 05:24:28.142273 | orchestrator | Thursday 26 March 2026 05:24:07 +0000 (0:00:00.785) 0:21:31.347 ********
2026-03-26 05:24:28.142284 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:24:28.142295 | orchestrator |
2026-03-26 05:24:28.142306 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 05:24:28.142316 | orchestrator | Thursday 26 March 2026 05:24:08 +0000 (0:00:00.769) 0:21:32.116 ********
2026-03-26 05:24:28.142327 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:24:28.142337 | orchestrator |
2026-03-26 05:24:28.142348 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:24:28.142358 | orchestrator | Thursday 26 March 2026 05:24:09 +0000 (0:00:00.788) 0:21:32.905 ********
2026-03-26 05:24:28.142369 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:24:28.142380 | orchestrator |
2026-03-26 05:24:28.142390 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:24:28.142401 | orchestrator | Thursday 26 March 2026 05:24:10 +0000 (0:00:00.760) 0:21:33.665 ********
2026-03-26 05:24:28.142412 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:24:28.142422 | orchestrator |
2026-03-26 05:24:28.142433 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:24:28.142453 | orchestrator | Thursday 26 March 2026 05:24:10 +0000 (0:00:00.800) 0:21:34.466 ********
2026-03-26 05:24:28.142464 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:24:28.142474 | orchestrator |
2026-03-26 05:24:28.142485 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:24:28.142515 | orchestrator | Thursday 26 March 2026 05:24:11 +0000 (0:00:00.767) 0:21:35.233 ********
2026-03-26 05:24:28.142527 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:24:28.142538 | orchestrator |
2026-03-26 05:24:28.142549 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:24:28.142559 | orchestrator | Thursday 26 
March 2026 05:24:12 +0000 (0:00:00.764) 0:21:35.998 ******** 2026-03-26 05:24:28.142570 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142581 | orchestrator | 2026-03-26 05:24:28.142631 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 05:24:28.142642 | orchestrator | Thursday 26 March 2026 05:24:13 +0000 (0:00:00.774) 0:21:36.772 ******** 2026-03-26 05:24:28.142653 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142663 | orchestrator | 2026-03-26 05:24:28.142674 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 05:24:28.142685 | orchestrator | Thursday 26 March 2026 05:24:13 +0000 (0:00:00.810) 0:21:37.583 ******** 2026-03-26 05:24:28.142695 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142706 | orchestrator | 2026-03-26 05:24:28.142717 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 05:24:28.142727 | orchestrator | Thursday 26 March 2026 05:24:14 +0000 (0:00:00.778) 0:21:38.362 ******** 2026-03-26 05:24:28.142738 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142749 | orchestrator | 2026-03-26 05:24:28.142759 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 05:24:28.142770 | orchestrator | Thursday 26 March 2026 05:24:15 +0000 (0:00:00.781) 0:21:39.144 ******** 2026-03-26 05:24:28.142781 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142792 | orchestrator | 2026-03-26 05:24:28.142802 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-26 05:24:28.142813 | orchestrator | Thursday 26 March 2026 05:24:16 +0000 (0:00:00.772) 0:21:39.916 ******** 2026-03-26 05:24:28.142824 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142834 | orchestrator | 2026-03-26 05:24:28.142845 | 
orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 05:24:28.142855 | orchestrator | Thursday 26 March 2026 05:24:17 +0000 (0:00:00.838) 0:21:40.755 ******** 2026-03-26 05:24:28.142866 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142877 | orchestrator | 2026-03-26 05:24:28.142887 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 05:24:28.142898 | orchestrator | Thursday 26 March 2026 05:24:17 +0000 (0:00:00.785) 0:21:41.540 ******** 2026-03-26 05:24:28.142909 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142919 | orchestrator | 2026-03-26 05:24:28.142930 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 05:24:28.142940 | orchestrator | Thursday 26 March 2026 05:24:18 +0000 (0:00:00.772) 0:21:42.313 ******** 2026-03-26 05:24:28.142951 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.142961 | orchestrator | 2026-03-26 05:24:28.142972 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 05:24:28.142983 | orchestrator | Thursday 26 March 2026 05:24:19 +0000 (0:00:00.789) 0:21:43.102 ******** 2026-03-26 05:24:28.142993 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143004 | orchestrator | 2026-03-26 05:24:28.143014 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 05:24:28.143025 | orchestrator | Thursday 26 March 2026 05:24:20 +0000 (0:00:00.770) 0:21:43.873 ******** 2026-03-26 05:24:28.143036 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143046 | orchestrator | 2026-03-26 05:24:28.143057 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-26 05:24:28.143075 | orchestrator | Thursday 26 March 2026 05:24:20 +0000 (0:00:00.758) 0:21:44.632 ******** 
2026-03-26 05:24:28.143086 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143097 | orchestrator | 2026-03-26 05:24:28.143108 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-26 05:24:28.143120 | orchestrator | Thursday 26 March 2026 05:24:21 +0000 (0:00:00.793) 0:21:45.425 ******** 2026-03-26 05:24:28.143130 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143141 | orchestrator | 2026-03-26 05:24:28.143152 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 05:24:28.143162 | orchestrator | Thursday 26 March 2026 05:24:22 +0000 (0:00:00.833) 0:21:46.258 ******** 2026-03-26 05:24:28.143173 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143183 | orchestrator | 2026-03-26 05:24:28.143194 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 05:24:28.143204 | orchestrator | Thursday 26 March 2026 05:24:23 +0000 (0:00:00.781) 0:21:47.040 ******** 2026-03-26 05:24:28.143215 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143226 | orchestrator | 2026-03-26 05:24:28.143243 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-26 05:24:28.143254 | orchestrator | Thursday 26 March 2026 05:24:24 +0000 (0:00:00.810) 0:21:47.851 ******** 2026-03-26 05:24:28.143265 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143275 | orchestrator | 2026-03-26 05:24:28.143286 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-26 05:24:28.143297 | orchestrator | Thursday 26 March 2026 05:24:24 +0000 (0:00:00.778) 0:21:48.630 ******** 2026-03-26 05:24:28.143307 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143318 | orchestrator | 2026-03-26 05:24:28.143329 | orchestrator | TASK [ceph-container-common : Generate 
systemd ceph target file] *************** 2026-03-26 05:24:28.143339 | orchestrator | Thursday 26 March 2026 05:24:25 +0000 (0:00:00.769) 0:21:49.399 ******** 2026-03-26 05:24:28.143350 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143361 | orchestrator | 2026-03-26 05:24:28.143371 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 05:24:28.143382 | orchestrator | Thursday 26 March 2026 05:24:26 +0000 (0:00:00.815) 0:21:50.215 ******** 2026-03-26 05:24:28.143392 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:28.143403 | orchestrator | 2026-03-26 05:24:28.143414 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 05:24:28.143424 | orchestrator | Thursday 26 March 2026 05:24:27 +0000 (0:00:00.807) 0:21:51.023 ******** 2026-03-26 05:24:28.143442 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.715873 | orchestrator | 2026-03-26 05:24:58.715997 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 05:24:58.716015 | orchestrator | Thursday 26 March 2026 05:24:28 +0000 (0:00:00.766) 0:21:51.789 ******** 2026-03-26 05:24:58.716027 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716039 | orchestrator | 2026-03-26 05:24:58.716051 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 05:24:58.716062 | orchestrator | Thursday 26 March 2026 05:24:28 +0000 (0:00:00.771) 0:21:52.560 ******** 2026-03-26 05:24:58.716073 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716083 | orchestrator | 2026-03-26 05:24:58.716094 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 05:24:58.716106 | orchestrator | Thursday 26 March 2026 05:24:29 +0000 (0:00:00.755) 0:21:53.316 ******** 2026-03-26 05:24:58.716116 | orchestrator | skipping: 
[testbed-node-2] 2026-03-26 05:24:58.716127 | orchestrator | 2026-03-26 05:24:58.716138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 05:24:58.716149 | orchestrator | Thursday 26 March 2026 05:24:30 +0000 (0:00:00.756) 0:21:54.072 ******** 2026-03-26 05:24:58.716160 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716170 | orchestrator | 2026-03-26 05:24:58.716204 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 05:24:58.716216 | orchestrator | Thursday 26 March 2026 05:24:31 +0000 (0:00:00.767) 0:21:54.840 ******** 2026-03-26 05:24:58.716226 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716237 | orchestrator | 2026-03-26 05:24:58.716247 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 05:24:58.716258 | orchestrator | Thursday 26 March 2026 05:24:32 +0000 (0:00:00.897) 0:21:55.738 ******** 2026-03-26 05:24:58.716268 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716278 | orchestrator | 2026-03-26 05:24:58.716289 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 05:24:58.716300 | orchestrator | Thursday 26 March 2026 05:24:32 +0000 (0:00:00.824) 0:21:56.563 ******** 2026-03-26 05:24:58.716310 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716321 | orchestrator | 2026-03-26 05:24:58.716331 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 05:24:58.716342 | orchestrator | Thursday 26 March 2026 05:24:33 +0000 (0:00:00.821) 0:21:57.385 ******** 2026-03-26 05:24:58.716352 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716363 | orchestrator | 2026-03-26 05:24:58.716373 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-26 05:24:58.716384 
| orchestrator | Thursday 26 March 2026 05:24:34 +0000 (0:00:00.776) 0:21:58.162 ******** 2026-03-26 05:24:58.716397 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716409 | orchestrator | 2026-03-26 05:24:58.716421 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-26 05:24:58.716433 | orchestrator | Thursday 26 March 2026 05:24:35 +0000 (0:00:00.772) 0:21:58.935 ******** 2026-03-26 05:24:58.716445 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716457 | orchestrator | 2026-03-26 05:24:58.716470 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-26 05:24:58.716481 | orchestrator | Thursday 26 March 2026 05:24:36 +0000 (0:00:00.775) 0:21:59.710 ******** 2026-03-26 05:24:58.716493 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716505 | orchestrator | 2026-03-26 05:24:58.716517 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-26 05:24:58.716529 | orchestrator | Thursday 26 March 2026 05:24:36 +0000 (0:00:00.794) 0:22:00.505 ******** 2026-03-26 05:24:58.716541 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716553 | orchestrator | 2026-03-26 05:24:58.716565 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-26 05:24:58.716579 | orchestrator | Thursday 26 March 2026 05:24:37 +0000 (0:00:00.774) 0:22:01.279 ******** 2026-03-26 05:24:58.716591 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716627 | orchestrator | 2026-03-26 05:24:58.716640 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-26 05:24:58.716652 | orchestrator | Thursday 26 March 2026 05:24:38 +0000 (0:00:00.774) 0:22:02.054 ******** 2026-03-26 05:24:58.716665 | orchestrator | skipping: [testbed-node-2] 
2026-03-26 05:24:58.716677 | orchestrator | 2026-03-26 05:24:58.716689 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-26 05:24:58.716701 | orchestrator | Thursday 26 March 2026 05:24:39 +0000 (0:00:00.806) 0:22:02.860 ******** 2026-03-26 05:24:58.716713 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716725 | orchestrator | 2026-03-26 05:24:58.716753 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-26 05:24:58.716764 | orchestrator | Thursday 26 March 2026 05:24:39 +0000 (0:00:00.757) 0:22:03.617 ******** 2026-03-26 05:24:58.716775 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716785 | orchestrator | 2026-03-26 05:24:58.716796 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-26 05:24:58.716806 | orchestrator | Thursday 26 March 2026 05:24:40 +0000 (0:00:00.791) 0:22:04.409 ******** 2026-03-26 05:24:58.716825 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716836 | orchestrator | 2026-03-26 05:24:58.716847 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-26 05:24:58.716857 | orchestrator | Thursday 26 March 2026 05:24:41 +0000 (0:00:00.815) 0:22:05.225 ******** 2026-03-26 05:24:58.716867 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716878 | orchestrator | 2026-03-26 05:24:58.716889 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-26 05:24:58.716899 | orchestrator | Thursday 26 March 2026 05:24:42 +0000 (0:00:00.811) 0:22:06.036 ******** 2026-03-26 05:24:58.716910 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716920 | orchestrator | 2026-03-26 05:24:58.716931 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 
2026-03-26 05:24:58.716941 | orchestrator | Thursday 26 March 2026 05:24:43 +0000 (0:00:00.899) 0:22:06.936 ******** 2026-03-26 05:24:58.716968 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.716980 | orchestrator | 2026-03-26 05:24:58.716991 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 05:24:58.717001 | orchestrator | Thursday 26 March 2026 05:24:44 +0000 (0:00:00.782) 0:22:07.719 ******** 2026-03-26 05:24:58.717012 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717023 | orchestrator | 2026-03-26 05:24:58.717033 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 05:24:58.717044 | orchestrator | Thursday 26 March 2026 05:24:44 +0000 (0:00:00.858) 0:22:08.577 ******** 2026-03-26 05:24:58.717054 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717065 | orchestrator | 2026-03-26 05:24:58.717076 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 05:24:58.717086 | orchestrator | Thursday 26 March 2026 05:24:45 +0000 (0:00:00.774) 0:22:09.352 ******** 2026-03-26 05:24:58.717097 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717107 | orchestrator | 2026-03-26 05:24:58.717118 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:24:58.717130 | orchestrator | Thursday 26 March 2026 05:24:46 +0000 (0:00:00.814) 0:22:10.166 ******** 2026-03-26 05:24:58.717141 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717152 | orchestrator | 2026-03-26 05:24:58.717162 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:24:58.717173 | orchestrator | Thursday 26 March 2026 05:24:47 +0000 (0:00:00.837) 0:22:11.004 ******** 2026-03-26 05:24:58.717183 | orchestrator 
| skipping: [testbed-node-2] 2026-03-26 05:24:58.717194 | orchestrator | 2026-03-26 05:24:58.717205 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:24:58.717215 | orchestrator | Thursday 26 March 2026 05:24:48 +0000 (0:00:00.763) 0:22:11.767 ******** 2026-03-26 05:24:58.717226 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717237 | orchestrator | 2026-03-26 05:24:58.717247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:24:58.717258 | orchestrator | Thursday 26 March 2026 05:24:48 +0000 (0:00:00.787) 0:22:12.555 ******** 2026-03-26 05:24:58.717269 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717279 | orchestrator | 2026-03-26 05:24:58.717290 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:24:58.717300 | orchestrator | Thursday 26 March 2026 05:24:49 +0000 (0:00:00.774) 0:22:13.330 ******** 2026-03-26 05:24:58.717311 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:24:58.717322 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:24:58.717332 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:24:58.717343 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717354 | orchestrator | 2026-03-26 05:24:58.717364 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:24:58.717375 | orchestrator | Thursday 26 March 2026 05:24:50 +0000 (0:00:01.089) 0:22:14.419 ******** 2026-03-26 05:24:58.717392 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:24:58.717403 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:24:58.717413 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:24:58.717424 | 
orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717434 | orchestrator | 2026-03-26 05:24:58.717445 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:24:58.717456 | orchestrator | Thursday 26 March 2026 05:24:52 +0000 (0:00:01.469) 0:22:15.889 ******** 2026-03-26 05:24:58.717466 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:24:58.717477 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:24:58.717487 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:24:58.717498 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717509 | orchestrator | 2026-03-26 05:24:58.717519 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:24:58.717530 | orchestrator | Thursday 26 March 2026 05:24:53 +0000 (0:00:01.376) 0:22:17.266 ******** 2026-03-26 05:24:58.717543 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717561 | orchestrator | 2026-03-26 05:24:58.717579 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:24:58.717598 | orchestrator | Thursday 26 March 2026 05:24:54 +0000 (0:00:00.795) 0:22:18.061 ******** 2026-03-26 05:24:58.717652 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-26 05:24:58.717678 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717696 | orchestrator | 2026-03-26 05:24:58.717714 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 05:24:58.717732 | orchestrator | Thursday 26 March 2026 05:24:55 +0000 (0:00:00.901) 0:22:18.963 ******** 2026-03-26 05:24:58.717749 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717768 | orchestrator | 2026-03-26 05:24:58.717786 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] 
********************************** 2026-03-26 05:24:58.717804 | orchestrator | Thursday 26 March 2026 05:24:56 +0000 (0:00:00.795) 0:22:19.758 ******** 2026-03-26 05:24:58.717818 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-26 05:24:58.717828 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-26 05:24:58.717839 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-26 05:24:58.717850 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717860 | orchestrator | 2026-03-26 05:24:58.717871 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-26 05:24:58.717881 | orchestrator | Thursday 26 March 2026 05:24:57 +0000 (0:00:01.055) 0:22:20.814 ******** 2026-03-26 05:24:58.717892 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:24:58.717903 | orchestrator | 2026-03-26 05:24:58.717913 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-26 05:24:58.717924 | orchestrator | Thursday 26 March 2026 05:24:57 +0000 (0:00:00.778) 0:22:21.593 ******** 2026-03-26 05:24:58.717943 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:25:39.657322 | orchestrator | 2026-03-26 05:25:39.657465 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-26 05:25:39.657483 | orchestrator | Thursday 26 March 2026 05:24:58 +0000 (0:00:00.766) 0:22:22.360 ******** 2026-03-26 05:25:39.657495 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:25:39.657507 | orchestrator | 2026-03-26 05:25:39.657518 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-26 05:25:39.657529 | orchestrator | Thursday 26 March 2026 05:24:59 +0000 (0:00:00.809) 0:22:23.170 ******** 2026-03-26 05:25:39.657540 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:25:39.657551 | orchestrator | 2026-03-26 
05:25:39.657561 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-26 05:25:39.657572 | orchestrator | 2026-03-26 05:25:39.657583 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-26 05:25:39.657617 | orchestrator | Thursday 26 March 2026 05:25:00 +0000 (0:00:01.409) 0:22:24.579 ******** 2026-03-26 05:25:39.657687 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:25:39.657708 | orchestrator | 2026-03-26 05:25:39.657728 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-26 05:25:39.657739 | orchestrator | Thursday 26 March 2026 05:25:13 +0000 (0:00:12.895) 0:22:37.474 ******** 2026-03-26 05:25:39.657749 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:25:39.657760 | orchestrator | 2026-03-26 05:25:39.657770 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:25:39.657781 | orchestrator | Thursday 26 March 2026 05:25:16 +0000 (0:00:02.590) 0:22:40.065 ******** 2026-03-26 05:25:39.657792 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-26 05:25:39.657803 | orchestrator | 2026-03-26 05:25:39.657813 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:25:39.657824 | orchestrator | Thursday 26 March 2026 05:25:17 +0000 (0:00:01.141) 0:22:41.206 ******** 2026-03-26 05:25:39.657834 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.657846 | orchestrator | 2026-03-26 05:25:39.657857 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:25:39.657869 | orchestrator | Thursday 26 March 2026 05:25:19 +0000 (0:00:01.524) 0:22:42.731 ******** 2026-03-26 05:25:39.657881 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.657893 | orchestrator | 2026-03-26 05:25:39.657905 
| orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:25:39.657917 | orchestrator | Thursday 26 March 2026 05:25:20 +0000 (0:00:01.146) 0:22:43.877 ******** 2026-03-26 05:25:39.657929 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.657941 | orchestrator | 2026-03-26 05:25:39.657953 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:25:39.657964 | orchestrator | Thursday 26 March 2026 05:25:21 +0000 (0:00:01.532) 0:22:45.410 ******** 2026-03-26 05:25:39.657976 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.657989 | orchestrator | 2026-03-26 05:25:39.658001 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:25:39.658013 | orchestrator | Thursday 26 March 2026 05:25:22 +0000 (0:00:01.118) 0:22:46.528 ******** 2026-03-26 05:25:39.658086 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.658098 | orchestrator | 2026-03-26 05:25:39.658110 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 05:25:39.658123 | orchestrator | Thursday 26 March 2026 05:25:24 +0000 (0:00:01.136) 0:22:47.665 ******** 2026-03-26 05:25:39.658135 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.658158 | orchestrator | 2026-03-26 05:25:39.658170 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:25:39.658183 | orchestrator | Thursday 26 March 2026 05:25:25 +0000 (0:00:01.154) 0:22:48.820 ******** 2026-03-26 05:25:39.658195 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:39.658207 | orchestrator | 2026-03-26 05:25:39.658218 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 05:25:39.658229 | orchestrator | Thursday 26 March 2026 05:25:26 +0000 (0:00:01.157) 0:22:49.977 ******** 2026-03-26 
05:25:39.658240 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.658250 | orchestrator | 2026-03-26 05:25:39.658261 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 05:25:39.658271 | orchestrator | Thursday 26 March 2026 05:25:27 +0000 (0:00:01.099) 0:22:51.077 ******** 2026-03-26 05:25:39.658282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:25:39.658293 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:25:39.658318 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:25:39.658329 | orchestrator | 2026-03-26 05:25:39.658340 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:25:39.658360 | orchestrator | Thursday 26 March 2026 05:25:29 +0000 (0:00:01.962) 0:22:53.040 ******** 2026-03-26 05:25:39.658371 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:39.658381 | orchestrator | 2026-03-26 05:25:39.658392 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 05:25:39.658402 | orchestrator | Thursday 26 March 2026 05:25:30 +0000 (0:00:01.203) 0:22:54.244 ******** 2026-03-26 05:25:39.658413 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:25:39.658424 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:25:39.658434 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:25:39.658445 | orchestrator | 2026-03-26 05:25:39.658455 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:25:39.658466 | orchestrator | Thursday 26 March 2026 05:25:33 +0000 (0:00:03.180) 0:22:57.424 ******** 2026-03-26 05:25:39.658477 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-03-26 05:25:39.658488 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 05:25:39.658499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 05:25:39.658509 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:39.658520 | orchestrator | 2026-03-26 05:25:39.658549 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:25:39.658561 | orchestrator | Thursday 26 March 2026 05:25:35 +0000 (0:00:01.779) 0:22:59.203 ******** 2026-03-26 05:25:39.658573 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:25:39.658587 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 05:25:39.658598 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 05:25:39.658609 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:39.658619 | orchestrator | 2026-03-26 05:25:39.658669 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 05:25:39.658682 | orchestrator | Thursday 26 March 2026 05:25:37 +0000 (0:00:01.665) 0:23:00.869 ******** 2026-03-26 05:25:39.658695 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:39.658709 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:39.658721 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:39.658740 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:39.658751 | orchestrator | 2026-03-26 05:25:39.658762 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 05:25:39.658773 | orchestrator | Thursday 26 March 2026 05:25:38 +0000 (0:00:01.192) 0:23:02.061 ******** 2026-03-26 05:25:39.658792 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:25:31.452374', 'end': '2026-03-26 05:25:31.504874', 'delta': '0:00:00.052500', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 05:25:39.658808 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:25:32.004082', 'end': '2026-03-26 05:25:32.055136', 'delta': '0:00:00.051054', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 05:25:39.658828 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:25:32.553894', 'end': '2026-03-26 05:25:32.609169', 'delta': '0:00:00.055275', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 05:25:58.384583 | orchestrator | 2026-03-26 05:25:58.384725 | orchestrator | TASK [ceph-facts : 
Set_fact _container_exec_cmd] ******************************* 2026-03-26 05:25:58.384738 | orchestrator | Thursday 26 March 2026 05:25:39 +0000 (0:00:01.241) 0:23:03.303 ******** 2026-03-26 05:25:58.384745 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:58.384752 | orchestrator | 2026-03-26 05:25:58.384759 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 05:25:58.384765 | orchestrator | Thursday 26 March 2026 05:25:40 +0000 (0:00:01.309) 0:23:04.612 ******** 2026-03-26 05:25:58.384771 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.384779 | orchestrator | 2026-03-26 05:25:58.384786 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 05:25:58.384792 | orchestrator | Thursday 26 March 2026 05:25:42 +0000 (0:00:01.286) 0:23:05.898 ******** 2026-03-26 05:25:58.384798 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:58.384804 | orchestrator | 2026-03-26 05:25:58.384809 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 05:25:58.384816 | orchestrator | Thursday 26 March 2026 05:25:43 +0000 (0:00:01.138) 0:23:07.037 ******** 2026-03-26 05:25:58.384821 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:58.384827 | orchestrator | 2026-03-26 05:25:58.384833 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:25:58.384856 | orchestrator | Thursday 26 March 2026 05:25:45 +0000 (0:00:02.009) 0:23:09.046 ******** 2026-03-26 05:25:58.384862 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:25:58.384868 | orchestrator | 2026-03-26 05:25:58.384874 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 05:25:58.384880 | orchestrator | Thursday 26 March 2026 05:25:46 +0000 (0:00:01.180) 0:23:10.226 ******** 2026-03-26 05:25:58.384885 | orchestrator | skipping: 
[testbed-node-0] 2026-03-26 05:25:58.384891 | orchestrator | 2026-03-26 05:25:58.384897 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 05:25:58.384902 | orchestrator | Thursday 26 March 2026 05:25:47 +0000 (0:00:01.129) 0:23:11.356 ******** 2026-03-26 05:25:58.384908 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.384914 | orchestrator | 2026-03-26 05:25:58.384920 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:25:58.384925 | orchestrator | Thursday 26 March 2026 05:25:48 +0000 (0:00:01.190) 0:23:12.547 ******** 2026-03-26 05:25:58.384931 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.384937 | orchestrator | 2026-03-26 05:25:58.384943 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 05:25:58.384948 | orchestrator | Thursday 26 March 2026 05:25:50 +0000 (0:00:01.181) 0:23:13.728 ******** 2026-03-26 05:25:58.384954 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.384960 | orchestrator | 2026-03-26 05:25:58.384966 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 05:25:58.384971 | orchestrator | Thursday 26 March 2026 05:25:51 +0000 (0:00:01.150) 0:23:14.879 ******** 2026-03-26 05:25:58.384977 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.384983 | orchestrator | 2026-03-26 05:25:58.384989 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 05:25:58.384995 | orchestrator | Thursday 26 March 2026 05:25:52 +0000 (0:00:01.189) 0:23:16.069 ******** 2026-03-26 05:25:58.385000 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.385006 | orchestrator | 2026-03-26 05:25:58.385012 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 05:25:58.385017 | 
orchestrator | Thursday 26 March 2026 05:25:53 +0000 (0:00:01.160) 0:23:17.229 ******** 2026-03-26 05:25:58.385034 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.385040 | orchestrator | 2026-03-26 05:25:58.385046 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 05:25:58.385052 | orchestrator | Thursday 26 March 2026 05:25:54 +0000 (0:00:01.140) 0:23:18.370 ******** 2026-03-26 05:25:58.385057 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.385063 | orchestrator | 2026-03-26 05:25:58.385069 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 05:25:58.385075 | orchestrator | Thursday 26 March 2026 05:25:55 +0000 (0:00:01.182) 0:23:19.553 ******** 2026-03-26 05:25:58.385081 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:58.385087 | orchestrator | 2026-03-26 05:25:58.385093 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 05:25:58.385098 | orchestrator | Thursday 26 March 2026 05:25:57 +0000 (0:00:01.126) 0:23:20.679 ******** 2026-03-26 05:25:58.385107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:58.385115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:58.385139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:58.385147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:25:58.385156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:58.385163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:58.385170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:58.385189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:25:59.595211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:59.595315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:25:59.595332 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:25:59.595345 | orchestrator | 2026-03-26 05:25:59.595357 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:25:59.595369 | orchestrator | Thursday 26 March 2026 05:25:58 +0000 (0:00:01.346) 0:23:22.026 ******** 2026-03-26 05:25:59.595383 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595398 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595426 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595439 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595490 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595503 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595515 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595535 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:25:59.595564 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:26:39.617895 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:26:39.618073 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.618096 | orchestrator | 2026-03-26 05:26:39.618109 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:26:39.618122 | orchestrator | Thursday 26 March 2026 05:25:59 +0000 (0:00:01.217) 0:23:23.244 ******** 2026-03-26 05:26:39.618133 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:26:39.618144 | orchestrator | 2026-03-26 05:26:39.618155 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:26:39.618166 | orchestrator 
| Thursday 26 March 2026 05:26:01 +0000 (0:00:01.553) 0:23:24.797 ******** 2026-03-26 05:26:39.618177 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:26:39.618188 | orchestrator | 2026-03-26 05:26:39.618199 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:26:39.618210 | orchestrator | Thursday 26 March 2026 05:26:02 +0000 (0:00:01.183) 0:23:25.980 ******** 2026-03-26 05:26:39.618230 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:26:39.618241 | orchestrator | 2026-03-26 05:26:39.618252 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:26:39.618263 | orchestrator | Thursday 26 March 2026 05:26:03 +0000 (0:00:01.524) 0:23:27.505 ******** 2026-03-26 05:26:39.618274 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.618284 | orchestrator | 2026-03-26 05:26:39.618295 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:26:39.618306 | orchestrator | Thursday 26 March 2026 05:26:04 +0000 (0:00:01.131) 0:23:28.636 ******** 2026-03-26 05:26:39.618317 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.618327 | orchestrator | 2026-03-26 05:26:39.618338 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:26:39.618349 | orchestrator | Thursday 26 March 2026 05:26:06 +0000 (0:00:01.244) 0:23:29.880 ******** 2026-03-26 05:26:39.618359 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.618370 | orchestrator | 2026-03-26 05:26:39.618381 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:26:39.618392 | orchestrator | Thursday 26 March 2026 05:26:07 +0000 (0:00:01.138) 0:23:31.019 ******** 2026-03-26 05:26:39.618442 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:26:39.618456 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-03-26 05:26:39.618468 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-26 05:26:39.618480 | orchestrator | 2026-03-26 05:26:39.618493 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:26:39.618506 | orchestrator | Thursday 26 March 2026 05:26:09 +0000 (0:00:02.118) 0:23:33.138 ******** 2026-03-26 05:26:39.618516 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 05:26:39.618527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 05:26:39.618538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 05:26:39.618548 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.618559 | orchestrator | 2026-03-26 05:26:39.618570 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:26:39.618580 | orchestrator | Thursday 26 March 2026 05:26:10 +0000 (0:00:01.158) 0:23:34.296 ******** 2026-03-26 05:26:39.618591 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.618601 | orchestrator | 2026-03-26 05:26:39.618612 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:26:39.618623 | orchestrator | Thursday 26 March 2026 05:26:11 +0000 (0:00:01.167) 0:23:35.463 ******** 2026-03-26 05:26:39.618633 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:26:39.618644 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:26:39.618655 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:26:39.618702 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:26:39.618714 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-26 05:26:39.618724 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:26:39.618735 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:26:39.618746 | orchestrator | 2026-03-26 05:26:39.618756 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:26:39.618767 | orchestrator | Thursday 26 March 2026 05:26:13 +0000 (0:00:01.837) 0:23:37.300 ******** 2026-03-26 05:26:39.618777 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:26:39.618788 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:26:39.618799 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:26:39.618809 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:26:39.618837 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:26:39.618849 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:26:39.618860 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:26:39.618870 | orchestrator | 2026-03-26 05:26:39.618881 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-26 05:26:39.618892 | orchestrator | Thursday 26 March 2026 05:26:16 +0000 (0:00:02.588) 0:23:39.889 ******** 2026-03-26 05:26:39.618903 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-03-26 05:26:39.618914 | orchestrator | 2026-03-26 05:26:39.618925 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-26 05:26:39.618935 
| orchestrator | Thursday 26 March 2026 05:26:17 +0000 (0:00:01.119) 0:23:41.009 ******** 2026-03-26 05:26:39.618946 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-03-26 05:26:39.618969 | orchestrator | 2026-03-26 05:26:39.618980 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-26 05:26:39.618990 | orchestrator | Thursday 26 March 2026 05:26:18 +0000 (0:00:01.097) 0:23:42.107 ******** 2026-03-26 05:26:39.619001 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:26:39.619012 | orchestrator | 2026-03-26 05:26:39.619022 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-26 05:26:39.619033 | orchestrator | Thursday 26 March 2026 05:26:20 +0000 (0:00:01.558) 0:23:43.665 ******** 2026-03-26 05:26:39.619043 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.619054 | orchestrator | 2026-03-26 05:26:39.619065 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-26 05:26:39.619075 | orchestrator | Thursday 26 March 2026 05:26:21 +0000 (0:00:01.112) 0:23:44.778 ******** 2026-03-26 05:26:39.619086 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.619096 | orchestrator | 2026-03-26 05:26:39.619107 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-26 05:26:39.619117 | orchestrator | Thursday 26 March 2026 05:26:22 +0000 (0:00:01.139) 0:23:45.918 ******** 2026-03-26 05:26:39.619128 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:26:39.619138 | orchestrator | 2026-03-26 05:26:39.619149 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-26 05:26:39.619159 | orchestrator | Thursday 26 March 2026 05:26:23 +0000 (0:00:01.106) 0:23:47.024 ******** 2026-03-26 05:26:39.619170 | orchestrator | ok: [testbed-node-0] 
2026-03-26 05:26:39.619181 | orchestrator |
2026-03-26 05:26:39.619191 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 05:26:39.619202 | orchestrator | Thursday 26 March 2026 05:26:24 +0000 (0:00:01.584) 0:23:48.609 ********
2026-03-26 05:26:39.619213 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619224 | orchestrator |
2026-03-26 05:26:39.619234 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 05:26:39.619250 | orchestrator | Thursday 26 March 2026 05:26:26 +0000 (0:00:01.162) 0:23:49.772 ********
2026-03-26 05:26:39.619261 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619272 | orchestrator |
2026-03-26 05:26:39.619282 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 05:26:39.619293 | orchestrator | Thursday 26 March 2026 05:26:27 +0000 (0:00:01.154) 0:23:50.927 ********
2026-03-26 05:26:39.619304 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:26:39.619314 | orchestrator |
2026-03-26 05:26:39.619325 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 05:26:39.619335 | orchestrator | Thursday 26 March 2026 05:26:28 +0000 (0:00:01.595) 0:23:52.522 ********
2026-03-26 05:26:39.619346 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:26:39.619356 | orchestrator |
2026-03-26 05:26:39.619367 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 05:26:39.619377 | orchestrator | Thursday 26 March 2026 05:26:30 +0000 (0:00:01.566) 0:23:54.089 ********
2026-03-26 05:26:39.619388 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619399 | orchestrator |
2026-03-26 05:26:39.619409 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:26:39.619420 | orchestrator | Thursday 26 March 2026 05:26:31 +0000 (0:00:01.126) 0:23:55.216 ********
2026-03-26 05:26:39.619430 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:26:39.619441 | orchestrator |
2026-03-26 05:26:39.619451 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:26:39.619462 | orchestrator | Thursday 26 March 2026 05:26:32 +0000 (0:00:01.179) 0:23:56.396 ********
2026-03-26 05:26:39.619472 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619483 | orchestrator |
2026-03-26 05:26:39.619494 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:26:39.619504 | orchestrator | Thursday 26 March 2026 05:26:33 +0000 (0:00:01.135) 0:23:57.531 ********
2026-03-26 05:26:39.619515 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619532 | orchestrator |
2026-03-26 05:26:39.619543 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:26:39.619553 | orchestrator | Thursday 26 March 2026 05:26:34 +0000 (0:00:01.113) 0:23:58.644 ********
2026-03-26 05:26:39.619564 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619574 | orchestrator |
2026-03-26 05:26:39.619585 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:26:39.619595 | orchestrator | Thursday 26 March 2026 05:26:36 +0000 (0:00:01.131) 0:23:59.776 ********
2026-03-26 05:26:39.619606 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619616 | orchestrator |
2026-03-26 05:26:39.619627 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 05:26:39.619638 | orchestrator | Thursday 26 March 2026 05:26:37 +0000 (0:00:01.113) 0:24:00.890 ********
2026-03-26 05:26:39.619648 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:26:39.619659 | orchestrator |
2026-03-26 05:26:39.619690 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 05:26:39.619701 | orchestrator | Thursday 26 March 2026 05:26:38 +0000 (0:00:01.143) 0:24:02.033 ********
2026-03-26 05:26:39.619720 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.929658 | orchestrator |
2026-03-26 05:27:28.929920 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 05:27:28.929948 | orchestrator | Thursday 26 March 2026 05:26:39 +0000 (0:00:01.229) 0:24:03.263 ********
2026-03-26 05:27:28.929961 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.929973 | orchestrator |
2026-03-26 05:27:28.929985 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 05:27:28.929996 | orchestrator | Thursday 26 March 2026 05:26:40 +0000 (0:00:01.177) 0:24:04.441 ********
2026-03-26 05:27:28.930007 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.930081 | orchestrator |
2026-03-26 05:27:28.930094 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 05:27:28.930105 | orchestrator | Thursday 26 March 2026 05:26:41 +0000 (0:00:01.178) 0:24:05.619 ********
2026-03-26 05:27:28.930116 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930129 | orchestrator |
2026-03-26 05:27:28.930140 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 05:27:28.930153 | orchestrator | Thursday 26 March 2026 05:26:43 +0000 (0:00:01.167) 0:24:06.788 ********
2026-03-26 05:27:28.930165 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930178 | orchestrator |
2026-03-26 05:27:28.930190 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 05:27:28.930203 | orchestrator | Thursday 26 March 2026 05:26:44 +0000 (0:00:01.177) 0:24:07.965 ********
2026-03-26 05:27:28.930215 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930228 | orchestrator |
2026-03-26 05:27:28.930240 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 05:27:28.930262 | orchestrator | Thursday 26 March 2026 05:26:45 +0000 (0:00:01.153) 0:24:09.119 ********
2026-03-26 05:27:28.930281 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930301 | orchestrator |
2026-03-26 05:27:28.930320 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 05:27:28.930340 | orchestrator | Thursday 26 March 2026 05:26:46 +0000 (0:00:01.126) 0:24:10.245 ********
2026-03-26 05:27:28.930360 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930380 | orchestrator |
2026-03-26 05:27:28.930400 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 05:27:28.930420 | orchestrator | Thursday 26 March 2026 05:26:47 +0000 (0:00:01.145) 0:24:11.390 ********
2026-03-26 05:27:28.930439 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930458 | orchestrator |
2026-03-26 05:27:28.930477 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 05:27:28.930496 | orchestrator | Thursday 26 March 2026 05:26:48 +0000 (0:00:01.144) 0:24:12.535 ********
2026-03-26 05:27:28.930517 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930570 | orchestrator |
2026-03-26 05:27:28.930590 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 05:27:28.930612 | orchestrator | Thursday 26 March 2026 05:26:50 +0000 (0:00:01.167) 0:24:13.702 ********
2026-03-26 05:27:28.930632 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930652 | orchestrator |
2026-03-26 05:27:28.930672 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 05:27:28.930714 | orchestrator | Thursday 26 March 2026 05:26:51 +0000 (0:00:01.168) 0:24:14.871 ********
2026-03-26 05:27:28.930735 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930753 | orchestrator |
2026-03-26 05:27:28.930771 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 05:27:28.930790 | orchestrator | Thursday 26 March 2026 05:26:52 +0000 (0:00:01.152) 0:24:16.024 ********
2026-03-26 05:27:28.930809 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930828 | orchestrator |
2026-03-26 05:27:28.930845 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 05:27:28.930862 | orchestrator | Thursday 26 March 2026 05:26:53 +0000 (0:00:01.171) 0:24:17.195 ********
2026-03-26 05:27:28.930879 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930897 | orchestrator |
2026-03-26 05:27:28.930915 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 05:27:28.930933 | orchestrator | Thursday 26 March 2026 05:26:54 +0000 (0:00:01.142) 0:24:18.337 ********
2026-03-26 05:27:28.930951 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.930969 | orchestrator |
2026-03-26 05:27:28.930988 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 05:27:28.931006 | orchestrator | Thursday 26 March 2026 05:26:55 +0000 (0:00:01.128) 0:24:19.466 ********
2026-03-26 05:27:28.931026 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.931045 | orchestrator |
2026-03-26 05:27:28.931063 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 05:27:28.931082 | orchestrator | Thursday 26 March 2026 05:26:57 +0000 (0:00:02.349) 0:24:21.457 ********
2026-03-26 05:27:28.931101 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.931119 | orchestrator |
2026-03-26 05:27:28.931134 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 05:27:28.931145 | orchestrator | Thursday 26 March 2026 05:27:00 +0000 (0:00:02.349) 0:24:23.807 ********
2026-03-26 05:27:28.931156 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-03-26 05:27:28.931168 | orchestrator |
2026-03-26 05:27:28.931179 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 05:27:28.931189 | orchestrator | Thursday 26 March 2026 05:27:01 +0000 (0:00:01.169) 0:24:24.976 ********
2026-03-26 05:27:28.931200 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931211 | orchestrator |
2026-03-26 05:27:28.931221 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 05:27:28.931232 | orchestrator | Thursday 26 March 2026 05:27:02 +0000 (0:00:01.153) 0:24:26.129 ********
2026-03-26 05:27:28.931243 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931253 | orchestrator |
2026-03-26 05:27:28.931264 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 05:27:28.931275 | orchestrator | Thursday 26 March 2026 05:27:03 +0000 (0:00:01.119) 0:24:27.249 ********
2026-03-26 05:27:28.931308 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 05:27:28.931320 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 05:27:28.931331 | orchestrator |
2026-03-26 05:27:28.931341 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 05:27:28.931352 | orchestrator | Thursday 26 March 2026 05:27:05 +0000 (0:00:01.818) 0:24:29.067 ********
2026-03-26 05:27:28.931363 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.931373 | orchestrator |
2026-03-26 05:27:28.931384 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-26 05:27:28.931408 | orchestrator | Thursday 26 March 2026 05:27:06 +0000 (0:00:01.522) 0:24:30.590 ********
2026-03-26 05:27:28.931419 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931429 | orchestrator |
2026-03-26 05:27:28.931440 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-26 05:27:28.931450 | orchestrator | Thursday 26 March 2026 05:27:08 +0000 (0:00:01.244) 0:24:31.834 ********
2026-03-26 05:27:28.931461 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931472 | orchestrator |
2026-03-26 05:27:28.931482 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 05:27:28.931493 | orchestrator | Thursday 26 March 2026 05:27:09 +0000 (0:00:01.141) 0:24:32.976 ********
2026-03-26 05:27:28.931504 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931515 | orchestrator |
2026-03-26 05:27:28.931525 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 05:27:28.931586 | orchestrator | Thursday 26 March 2026 05:27:10 +0000 (0:00:01.148) 0:24:34.125 ********
2026-03-26 05:27:28.931598 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-03-26 05:27:28.931609 | orchestrator |
2026-03-26 05:27:28.931620 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-26 05:27:28.931631 | orchestrator | Thursday 26 March 2026 05:27:11 +0000 (0:00:01.260) 0:24:35.385 ********
2026-03-26 05:27:28.931641 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.931652 | orchestrator |
2026-03-26 05:27:28.931667 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-26 05:27:28.931686 | orchestrator | Thursday 26 March 2026 05:27:13 +0000 (0:00:01.823) 0:24:37.208 ********
2026-03-26 05:27:28.931743 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-26 05:27:28.931763 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-26 05:27:28.931781 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-26 05:27:28.931799 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931818 | orchestrator |
2026-03-26 05:27:28.931837 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-26 05:27:28.931856 | orchestrator | Thursday 26 March 2026 05:27:14 +0000 (0:00:01.112) 0:24:38.321 ********
2026-03-26 05:27:28.931883 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931895 | orchestrator |
2026-03-26 05:27:28.931906 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-26 05:27:28.931917 | orchestrator | Thursday 26 March 2026 05:27:15 +0000 (0:00:01.131) 0:24:39.452 ********
2026-03-26 05:27:28.931928 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931938 | orchestrator |
2026-03-26 05:27:28.931949 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-26 05:27:28.931960 | orchestrator | Thursday 26 March 2026 05:27:16 +0000 (0:00:01.147) 0:24:40.599 ********
2026-03-26 05:27:28.931970 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.931981 | orchestrator |
2026-03-26 05:27:28.931992 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-26 05:27:28.932003 | orchestrator | Thursday 26 March 2026 05:27:18 +0000 (0:00:01.135) 0:24:41.735 ********
2026-03-26 05:27:28.932014 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.932025 | orchestrator |
2026-03-26 05:27:28.932035 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-26 05:27:28.932046 | orchestrator | Thursday 26 March 2026 05:27:19 +0000 (0:00:01.173) 0:24:42.909 ********
2026-03-26 05:27:28.932056 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.932067 | orchestrator |
2026-03-26 05:27:28.932078 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 05:27:28.932088 | orchestrator | Thursday 26 March 2026 05:27:20 +0000 (0:00:01.175) 0:24:44.084 ********
2026-03-26 05:27:28.932111 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.932122 | orchestrator |
2026-03-26 05:27:28.932133 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 05:27:28.932144 | orchestrator | Thursday 26 March 2026 05:27:23 +0000 (0:00:02.611) 0:24:46.696 ********
2026-03-26 05:27:28.932154 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:27:28.932165 | orchestrator |
2026-03-26 05:27:28.932175 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 05:27:28.932192 | orchestrator | Thursday 26 March 2026 05:27:24 +0000 (0:00:01.159) 0:24:47.855 ********
2026-03-26 05:27:28.932210 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-03-26 05:27:28.932228 | orchestrator |
2026-03-26 05:27:28.932246 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 05:27:28.932263 | orchestrator | Thursday 26 March 2026 05:27:25 +0000 (0:00:01.298) 0:24:49.154 ********
2026-03-26 05:27:28.932282 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.932301 | orchestrator |
2026-03-26 05:27:28.932319 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 05:27:28.932336 | orchestrator | Thursday 26 March 2026 05:27:26 +0000 (0:00:01.141) 0:24:50.295 ********
2026-03-26 05:27:28.932347 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.932357 | orchestrator |
2026-03-26 05:27:28.932368 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 05:27:28.932379 | orchestrator | Thursday 26 March 2026 05:27:27 +0000 (0:00:01.144) 0:24:51.440 ********
2026-03-26 05:27:28.932389 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:27:28.932400 | orchestrator |
2026-03-26 05:27:28.932423 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 05:28:12.195769 | orchestrator | Thursday 26 March 2026 05:27:28 +0000 (0:00:01.127) 0:24:52.568 ********
2026-03-26 05:28:12.195875 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.195890 | orchestrator |
2026-03-26 05:28:12.195900 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 05:28:12.195907 | orchestrator | Thursday 26 March 2026 05:27:30 +0000 (0:00:01.194) 0:24:53.763 ********
2026-03-26 05:28:12.195915 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.195921 | orchestrator |
2026-03-26 05:28:12.195929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 05:28:12.195936 | orchestrator | Thursday 26 March 2026 05:27:31 +0000 (0:00:01.142) 0:24:54.905 ********
2026-03-26 05:28:12.195944 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.195951 | orchestrator |
2026-03-26 05:28:12.195958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 05:28:12.195965 | orchestrator | Thursday 26 March 2026 05:27:32 +0000 (0:00:01.173) 0:24:56.079 ********
2026-03-26 05:28:12.195972 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.195979 | orchestrator |
2026-03-26 05:28:12.195986 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 05:28:12.195992 | orchestrator | Thursday 26 March 2026 05:27:33 +0000 (0:00:01.149) 0:24:57.228 ********
2026-03-26 05:28:12.195998 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196005 | orchestrator |
2026-03-26 05:28:12.196011 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 05:28:12.196017 | orchestrator | Thursday 26 March 2026 05:27:34 +0000 (0:00:01.161) 0:24:58.390 ********
2026-03-26 05:28:12.196024 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:28:12.196031 | orchestrator |
2026-03-26 05:28:12.196037 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 05:28:12.196044 | orchestrator | Thursday 26 March 2026 05:27:35 +0000 (0:00:01.158) 0:24:59.548 ********
2026-03-26 05:28:12.196050 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-03-26 05:28:12.196058 | orchestrator |
2026-03-26 05:28:12.196065 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 05:28:12.196092 | orchestrator | Thursday 26 March 2026 05:27:37 +0000 (0:00:01.118) 0:25:00.667 ********
2026-03-26 05:28:12.196099 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-03-26 05:28:12.196106 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-26 05:28:12.196114 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-26 05:28:12.196120 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-26 05:28:12.196127 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-26 05:28:12.196133 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-26 05:28:12.196153 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-26 05:28:12.196159 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-26 05:28:12.196166 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 05:28:12.196172 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 05:28:12.196179 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 05:28:12.196185 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 05:28:12.196192 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 05:28:12.196198 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 05:28:12.196205 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-03-26 05:28:12.196212 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-03-26 05:28:12.196218 | orchestrator |
2026-03-26 05:28:12.196225 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 05:28:12.196231 | orchestrator | Thursday 26 March 2026 05:27:44 +0000 (0:00:07.036) 0:25:07.703 ********
2026-03-26 05:28:12.196237 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196244 | orchestrator |
2026-03-26 05:28:12.196250 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 05:28:12.196257 | orchestrator | Thursday 26 March 2026 05:27:45 +0000 (0:00:01.134) 0:25:08.837 ********
2026-03-26 05:28:12.196263 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196270 | orchestrator |
2026-03-26 05:28:12.196277 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 05:28:12.196283 | orchestrator | Thursday 26 March 2026 05:27:46 +0000 (0:00:01.117) 0:25:09.955 ********
2026-03-26 05:28:12.196290 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196296 | orchestrator |
2026-03-26 05:28:12.196302 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 05:28:12.196308 | orchestrator | Thursday 26 March 2026 05:27:47 +0000 (0:00:01.153) 0:25:11.109 ********
2026-03-26 05:28:12.196315 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196322 | orchestrator |
2026-03-26 05:28:12.196329 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 05:28:12.196336 | orchestrator | Thursday 26 March 2026 05:27:48 +0000 (0:00:01.133) 0:25:12.243 ********
2026-03-26 05:28:12.196343 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196349 | orchestrator |
2026-03-26 05:28:12.196355 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 05:28:12.196362 | orchestrator | Thursday 26 March 2026 05:27:49 +0000 (0:00:01.139) 0:25:13.382 ********
2026-03-26 05:28:12.196368 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196375 | orchestrator |
2026-03-26 05:28:12.196382 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 05:28:12.196389 | orchestrator | Thursday 26 March 2026 05:27:50 +0000 (0:00:01.108) 0:25:14.490 ********
2026-03-26 05:28:12.196396 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196402 | orchestrator |
2026-03-26 05:28:12.196428 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 05:28:12.196436 | orchestrator | Thursday 26 March 2026 05:27:51 +0000 (0:00:01.102) 0:25:15.593 ********
2026-03-26 05:28:12.196452 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196459 | orchestrator |
2026-03-26 05:28:12.196465 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 05:28:12.196471 | orchestrator | Thursday 26 March 2026 05:27:53 +0000 (0:00:01.116) 0:25:16.710 ********
2026-03-26 05:28:12.196478 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196485 | orchestrator |
2026-03-26 05:28:12.196492 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 05:28:12.196499 | orchestrator | Thursday 26 March 2026 05:27:54 +0000 (0:00:01.112) 0:25:17.823 ********
2026-03-26 05:28:12.196506 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196512 | orchestrator |
2026-03-26 05:28:12.196519 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 05:28:12.196526 | orchestrator | Thursday 26 March 2026 05:27:55 +0000 (0:00:01.125) 0:25:18.949 ********
2026-03-26 05:28:12.196532 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196540 | orchestrator |
2026-03-26 05:28:12.196547 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 05:28:12.196554 | orchestrator | Thursday 26 March 2026 05:27:56 +0000 (0:00:01.133) 0:25:20.082 ********
2026-03-26 05:28:12.196560 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196567 | orchestrator |
2026-03-26 05:28:12.196574 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 05:28:12.196582 | orchestrator | Thursday 26 March 2026 05:27:57 +0000 (0:00:01.154) 0:25:21.237 ********
2026-03-26 05:28:12.196589 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196596 | orchestrator |
2026-03-26 05:28:12.196603 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 05:28:12.196609 | orchestrator | Thursday 26 March 2026 05:27:58 +0000 (0:00:01.192) 0:25:22.429 ********
2026-03-26 05:28:12.196616 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196623 | orchestrator |
2026-03-26 05:28:12.196630 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 05:28:12.196636 | orchestrator | Thursday 26 March 2026 05:27:59 +0000 (0:00:01.157) 0:25:23.586 ********
2026-03-26 05:28:12.196643 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196650 | orchestrator |
2026-03-26 05:28:12.196656 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 05:28:12.196663 | orchestrator | Thursday 26 March 2026 05:28:01 +0000 (0:00:01.294) 0:25:24.880 ********
2026-03-26 05:28:12.196669 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196676 | orchestrator |
2026-03-26 05:28:12.196683 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 05:28:12.196690 | orchestrator | Thursday 26 March 2026 05:28:02 +0000 (0:00:01.262) 0:25:26.143 ********
2026-03-26 05:28:12.196705 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196739 | orchestrator |
2026-03-26 05:28:12.196748 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:28:12.196757 | orchestrator | Thursday 26 March 2026 05:28:03 +0000 (0:00:01.094) 0:25:27.238 ********
2026-03-26 05:28:12.196764 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196770 | orchestrator |
2026-03-26 05:28:12.196776 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:28:12.196785 | orchestrator | Thursday 26 March 2026 05:28:04 +0000 (0:00:01.118) 0:25:28.357 ********
2026-03-26 05:28:12.196791 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196797 | orchestrator |
2026-03-26 05:28:12.196803 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:28:12.196809 | orchestrator | Thursday 26 March 2026 05:28:05 +0000 (0:00:01.147) 0:25:29.504 ********
2026-03-26 05:28:12.196815 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196821 | orchestrator |
2026-03-26 05:28:12.196827 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:28:12.196841 | orchestrator | Thursday 26 March 2026 05:28:06 +0000 (0:00:00.973) 0:25:30.477 ********
2026-03-26 05:28:12.196847 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196853 | orchestrator |
2026-03-26 05:28:12.196860 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:28:12.196866 | orchestrator | Thursday 26 March 2026 05:28:07 +0000 (0:00:01.029) 0:25:31.507 ********
2026-03-26 05:28:12.196872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:28:12.196878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:28:12.196884 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:28:12.196890 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196896 | orchestrator |
2026-03-26 05:28:12.196902 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:28:12.196908 | orchestrator | Thursday 26 March 2026 05:28:09 +0000 (0:00:01.337) 0:25:32.844 ********
2026-03-26 05:28:12.196914 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:28:12.196920 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:28:12.196926 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:28:12.196937 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196943 | orchestrator |
2026-03-26 05:28:12.196949 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:28:12.196955 | orchestrator | Thursday 26 March 2026 05:28:10 +0000 (0:00:01.380) 0:25:34.225 ********
2026-03-26 05:28:12.196961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-26 05:28:12.196967 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-26 05:28:12.196973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-26 05:28:12.196979 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:28:12.196985 | orchestrator |
2026-03-26 05:28:12.197002 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:29:16.346354 | orchestrator | Thursday 26 March 2026 05:28:12 +0000 (0:00:01.607) 0:25:35.833 ********
2026-03-26 05:29:16.346446 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:29:16.346457 | orchestrator |
2026-03-26 05:29:16.346464 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:29:16.346471 | orchestrator | Thursday 26 March 2026 05:28:13 +0000 (0:00:01.149) 0:25:36.982 ********
2026-03-26 05:29:16.346478 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-26 05:29:16.346484 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:29:16.346491 | orchestrator |
2026-03-26 05:29:16.346497 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 05:29:16.346503 | orchestrator | Thursday 26 March 2026 05:28:14 +0000 (0:00:01.442) 0:25:38.424 ********
2026-03-26 05:29:16.346509 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:29:16.346516 | orchestrator |
2026-03-26 05:29:16.346522 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-26 05:29:16.346528 | orchestrator | Thursday 26 March 2026 05:28:16 +0000 (0:00:01.846) 0:25:40.271 ********
2026-03-26 05:29:16.346534 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:29:16.346541 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:29:16.346548 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:29:16.346554 | orchestrator |
2026-03-26 05:29:16.346560 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-26 05:29:16.346566 | orchestrator | Thursday 26 March 2026 05:28:18 +0000 (0:00:01.689) 0:25:41.960 ********
2026-03-26 05:29:16.346572 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-03-26 05:29:16.346578 | orchestrator |
2026-03-26 05:29:16.346584 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-26 05:29:16.346606 | orchestrator | Thursday 26 March 2026 05:28:19 +0000 (0:00:01.473) 0:25:43.434 ********
2026-03-26 05:29:16.346613 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:29:16.346619 | orchestrator |
2026-03-26 05:29:16.346625 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-26 05:29:16.346631 | orchestrator | Thursday 26 March 2026 05:28:21 +0000 (0:00:01.533) 0:25:44.967 ********
2026-03-26 05:29:16.346637 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:29:16.346643 | orchestrator |
2026-03-26 05:29:16.346650 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-26 05:29:16.346656 | orchestrator | Thursday 26 March 2026 05:28:22 +0000 (0:00:01.134) 0:25:46.102 ********
2026-03-26 05:29:16.346662 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-26 05:29:16.346668 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-26 05:29:16.346675 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-26 05:29:16.346693 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-26 05:29:16.346699 | orchestrator | 2026-03-26 05:29:16.346706 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-26 05:29:16.346712 | orchestrator | Thursday 26 March 2026 05:28:30 +0000 (0:00:07.644) 0:25:53.746 ******** 2026-03-26 05:29:16.346718 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:29:16.346724 | orchestrator | 2026-03-26 05:29:16.346730 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-26 05:29:16.346736 | orchestrator | Thursday 26 March 2026 05:28:31 +0000 (0:00:01.177) 0:25:54.923 ******** 2026-03-26 05:29:16.346742 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-26 05:29:16.346749 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-26 05:29:16.346788 | orchestrator | 2026-03-26 05:29:16.346795 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-26 05:29:16.346801 | orchestrator | Thursday 26 March 2026 05:28:34 +0000 (0:00:03.200) 0:25:58.124 ******** 2026-03-26 05:29:16.346807 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-26 05:29:16.346813 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-26 05:29:16.346819 | orchestrator | 2026-03-26 05:29:16.346825 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-26 05:29:16.346831 | orchestrator | Thursday 26 March 2026 05:28:36 +0000 (0:00:02.001) 0:26:00.125 ******** 2026-03-26 05:29:16.346837 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:29:16.346844 | orchestrator | 2026-03-26 05:29:16.346850 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-26 05:29:16.346856 | orchestrator | Thursday 26 March 2026 05:28:38 +0000 
(0:00:01.570) 0:26:01.696 ******** 2026-03-26 05:29:16.346862 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:29:16.346868 | orchestrator | 2026-03-26 05:29:16.346874 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-26 05:29:16.346880 | orchestrator | Thursday 26 March 2026 05:28:39 +0000 (0:00:01.205) 0:26:02.902 ******** 2026-03-26 05:29:16.346886 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:29:16.346892 | orchestrator | 2026-03-26 05:29:16.346899 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-26 05:29:16.346905 | orchestrator | Thursday 26 March 2026 05:28:40 +0000 (0:00:01.138) 0:26:04.040 ******** 2026-03-26 05:29:16.346911 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0 2026-03-26 05:29:16.346919 | orchestrator | 2026-03-26 05:29:16.346925 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-26 05:29:16.346932 | orchestrator | Thursday 26 March 2026 05:28:41 +0000 (0:00:01.469) 0:26:05.510 ******** 2026-03-26 05:29:16.346939 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:29:16.346946 | orchestrator | 2026-03-26 05:29:16.346953 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-26 05:29:16.346960 | orchestrator | Thursday 26 March 2026 05:28:43 +0000 (0:00:01.168) 0:26:06.679 ******** 2026-03-26 05:29:16.346974 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:29:16.346981 | orchestrator | 2026-03-26 05:29:16.346988 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-26 05:29:16.347008 | orchestrator | Thursday 26 March 2026 05:28:44 +0000 (0:00:01.120) 0:26:07.800 ******** 2026-03-26 05:29:16.347016 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0 2026-03-26 
05:29:16.347023 | orchestrator | 2026-03-26 05:29:16.347030 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-26 05:29:16.347037 | orchestrator | Thursday 26 March 2026 05:28:45 +0000 (0:00:01.482) 0:26:09.283 ******** 2026-03-26 05:29:16.347044 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:29:16.347050 | orchestrator | 2026-03-26 05:29:16.347057 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-26 05:29:16.347064 | orchestrator | Thursday 26 March 2026 05:28:47 +0000 (0:00:02.048) 0:26:11.331 ******** 2026-03-26 05:29:16.347071 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:29:16.347078 | orchestrator | 2026-03-26 05:29:16.347085 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-26 05:29:16.347092 | orchestrator | Thursday 26 March 2026 05:28:50 +0000 (0:00:02.472) 0:26:13.803 ******** 2026-03-26 05:29:16.347099 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:29:16.347106 | orchestrator | 2026-03-26 05:29:16.347113 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-26 05:29:16.347120 | orchestrator | Thursday 26 March 2026 05:28:52 +0000 (0:00:02.585) 0:26:16.389 ******** 2026-03-26 05:29:16.347126 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:29:16.347133 | orchestrator | 2026-03-26 05:29:16.347140 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-26 05:29:16.347147 | orchestrator | Thursday 26 March 2026 05:28:56 +0000 (0:00:03.817) 0:26:20.207 ******** 2026-03-26 05:29:16.347154 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:29:16.347161 | orchestrator | 2026-03-26 05:29:16.347168 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-26 05:29:16.347175 | orchestrator | 2026-03-26 05:29:16.347182 | 
orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-26 05:29:16.347189 | orchestrator | Thursday 26 March 2026 05:28:57 +0000 (0:00:01.285) 0:26:21.492 ******** 2026-03-26 05:29:16.347195 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:29:16.347201 | orchestrator | 2026-03-26 05:29:16.347207 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-26 05:29:16.347213 | orchestrator | Thursday 26 March 2026 05:29:00 +0000 (0:00:02.406) 0:26:23.899 ******** 2026-03-26 05:29:16.347219 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:29:16.347225 | orchestrator | 2026-03-26 05:29:16.347231 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:29:16.347238 | orchestrator | Thursday 26 March 2026 05:29:02 +0000 (0:00:02.020) 0:26:25.919 ******** 2026-03-26 05:29:16.347244 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-03-26 05:29:16.347250 | orchestrator | 2026-03-26 05:29:16.347256 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:29:16.347266 | orchestrator | Thursday 26 March 2026 05:29:03 +0000 (0:00:01.115) 0:26:27.035 ******** 2026-03-26 05:29:16.347272 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347278 | orchestrator | 2026-03-26 05:29:16.347285 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:29:16.347291 | orchestrator | Thursday 26 March 2026 05:29:04 +0000 (0:00:01.507) 0:26:28.542 ******** 2026-03-26 05:29:16.347297 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347303 | orchestrator | 2026-03-26 05:29:16.347309 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:29:16.347316 | orchestrator | Thursday 26 March 2026 05:29:06 +0000 
(0:00:01.141) 0:26:29.683 ******** 2026-03-26 05:29:16.347322 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347328 | orchestrator | 2026-03-26 05:29:16.347338 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:29:16.347345 | orchestrator | Thursday 26 March 2026 05:29:07 +0000 (0:00:01.498) 0:26:31.182 ******** 2026-03-26 05:29:16.347351 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347357 | orchestrator | 2026-03-26 05:29:16.347363 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:29:16.347369 | orchestrator | Thursday 26 March 2026 05:29:08 +0000 (0:00:01.193) 0:26:32.376 ******** 2026-03-26 05:29:16.347375 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347381 | orchestrator | 2026-03-26 05:29:16.347388 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 05:29:16.347394 | orchestrator | Thursday 26 March 2026 05:29:09 +0000 (0:00:01.131) 0:26:33.508 ******** 2026-03-26 05:29:16.347400 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347406 | orchestrator | 2026-03-26 05:29:16.347412 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:29:16.347418 | orchestrator | Thursday 26 March 2026 05:29:10 +0000 (0:00:01.121) 0:26:34.630 ******** 2026-03-26 05:29:16.347424 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:16.347431 | orchestrator | 2026-03-26 05:29:16.347437 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 05:29:16.347443 | orchestrator | Thursday 26 March 2026 05:29:12 +0000 (0:00:01.212) 0:26:35.842 ******** 2026-03-26 05:29:16.347449 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347455 | orchestrator | 2026-03-26 05:29:16.347461 | orchestrator | TASK [ceph-facts : Set_fact monitor_name 
ansible_facts['hostname']] ************ 2026-03-26 05:29:16.347467 | orchestrator | Thursday 26 March 2026 05:29:13 +0000 (0:00:01.130) 0:26:36.973 ******** 2026-03-26 05:29:16.347473 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:29:16.347479 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:29:16.347485 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:29:16.347492 | orchestrator | 2026-03-26 05:29:16.347498 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:29:16.347504 | orchestrator | Thursday 26 March 2026 05:29:15 +0000 (0:00:01.718) 0:26:38.692 ******** 2026-03-26 05:29:16.347510 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:16.347516 | orchestrator | 2026-03-26 05:29:16.347522 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 05:29:16.347532 | orchestrator | Thursday 26 March 2026 05:29:16 +0000 (0:00:01.295) 0:26:39.988 ******** 2026-03-26 05:29:40.851527 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:29:40.851675 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:29:40.851702 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:29:40.851720 | orchestrator | 2026-03-26 05:29:40.851739 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:29:40.851758 | orchestrator | Thursday 26 March 2026 05:29:19 +0000 (0:00:02.878) 0:26:42.867 ******** 2026-03-26 05:29:40.851841 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-26 05:29:40.851864 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-26 05:29:40.851885 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-2)  2026-03-26 05:29:40.851906 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.851926 | orchestrator | 2026-03-26 05:29:40.851947 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:29:40.851968 | orchestrator | Thursday 26 March 2026 05:29:20 +0000 (0:00:01.387) 0:26:44.254 ******** 2026-03-26 05:29:40.851982 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:29:40.852026 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 05:29:40.852040 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 05:29:40.852053 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852066 | orchestrator | 2026-03-26 05:29:40.852079 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 05:29:40.852092 | orchestrator | Thursday 26 March 2026 05:29:22 +0000 (0:00:01.674) 0:26:45.929 ******** 2026-03-26 05:29:40.852125 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:29:40.852142 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:29:40.852155 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:29:40.852169 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852182 | orchestrator | 2026-03-26 05:29:40.852194 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 05:29:40.852206 | orchestrator | Thursday 26 March 2026 05:29:23 +0000 (0:00:01.170) 0:26:47.099 ******** 2026-03-26 05:29:40.852222 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:29:16.907211', 'end': '2026-03-26 05:29:16.951545', 'delta': '0:00:00.044334', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 05:29:40.852261 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:29:17.432469', 'end': '2026-03-26 05:29:17.476218', 'delta': '0:00:00.043749', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 05:29:40.852286 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:29:17.941866', 'end': '2026-03-26 05:29:17.979790', 'delta': '0:00:00.037924', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 05:29:40.852299 | orchestrator | 2026-03-26 05:29:40.852312 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 05:29:40.852324 | orchestrator | Thursday 26 March 2026 05:29:24 +0000 (0:00:01.215) 0:26:48.315 ******** 2026-03-26 
05:29:40.852337 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:40.852349 | orchestrator | 2026-03-26 05:29:40.852362 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 05:29:40.852374 | orchestrator | Thursday 26 March 2026 05:29:25 +0000 (0:00:01.296) 0:26:49.611 ******** 2026-03-26 05:29:40.852387 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852399 | orchestrator | 2026-03-26 05:29:40.852410 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 05:29:40.852426 | orchestrator | Thursday 26 March 2026 05:29:27 +0000 (0:00:01.226) 0:26:50.838 ******** 2026-03-26 05:29:40.852437 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:40.852448 | orchestrator | 2026-03-26 05:29:40.852458 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 05:29:40.852469 | orchestrator | Thursday 26 March 2026 05:29:28 +0000 (0:00:01.159) 0:26:51.997 ******** 2026-03-26 05:29:40.852480 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:29:40.852491 | orchestrator | 2026-03-26 05:29:40.852501 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:29:40.852512 | orchestrator | Thursday 26 March 2026 05:29:30 +0000 (0:00:02.059) 0:26:54.057 ******** 2026-03-26 05:29:40.852522 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:29:40.852533 | orchestrator | 2026-03-26 05:29:40.852544 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 05:29:40.852554 | orchestrator | Thursday 26 March 2026 05:29:31 +0000 (0:00:01.115) 0:26:55.173 ******** 2026-03-26 05:29:40.852565 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852576 | orchestrator | 2026-03-26 05:29:40.852586 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-03-26 05:29:40.852597 | orchestrator | Thursday 26 March 2026 05:29:32 +0000 (0:00:01.191) 0:26:56.364 ******** 2026-03-26 05:29:40.852607 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852618 | orchestrator | 2026-03-26 05:29:40.852629 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:29:40.852640 | orchestrator | Thursday 26 March 2026 05:29:33 +0000 (0:00:01.258) 0:26:57.622 ******** 2026-03-26 05:29:40.852650 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852661 | orchestrator | 2026-03-26 05:29:40.852671 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 05:29:40.852682 | orchestrator | Thursday 26 March 2026 05:29:35 +0000 (0:00:01.136) 0:26:58.759 ******** 2026-03-26 05:29:40.852693 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852704 | orchestrator | 2026-03-26 05:29:40.852714 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 05:29:40.852725 | orchestrator | Thursday 26 March 2026 05:29:36 +0000 (0:00:01.140) 0:26:59.900 ******** 2026-03-26 05:29:40.852735 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852746 | orchestrator | 2026-03-26 05:29:40.852756 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 05:29:40.852806 | orchestrator | Thursday 26 March 2026 05:29:37 +0000 (0:00:01.161) 0:27:01.062 ******** 2026-03-26 05:29:40.852818 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852829 | orchestrator | 2026-03-26 05:29:40.852840 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 05:29:40.852851 | orchestrator | Thursday 26 March 2026 05:29:38 +0000 (0:00:01.142) 0:27:02.205 ******** 2026-03-26 05:29:40.852862 | orchestrator | skipping: 
[testbed-node-1] 2026-03-26 05:29:40.852872 | orchestrator | 2026-03-26 05:29:40.852883 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 05:29:40.852893 | orchestrator | Thursday 26 March 2026 05:29:39 +0000 (0:00:01.142) 0:27:03.347 ******** 2026-03-26 05:29:40.852904 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:40.852915 | orchestrator | 2026-03-26 05:29:40.852926 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 05:29:40.852944 | orchestrator | Thursday 26 March 2026 05:29:40 +0000 (0:00:01.148) 0:27:04.496 ******** 2026-03-26 05:29:44.557908 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:29:44.558100 | orchestrator | 2026-03-26 05:29:44.558123 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 05:29:44.558138 | orchestrator | Thursday 26 March 2026 05:29:41 +0000 (0:00:01.160) 0:27:05.657 ******** 2026-03-26 05:29:44.558152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:29:44.558168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:29:44.558179 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:29:44.558209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:29:44.558225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:29:44.558236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:29:44.558269 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:29:44.558327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2e41bcf9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:29:44.558357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:29:44.558385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:29:44.558403 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:29:44.558421 | orchestrator |
2026-03-26 05:29:44.558440 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-26 05:29:44.558459 | orchestrator | Thursday 26 March 2026 05:29:43 +0000 (0:00:01.294) 0:27:06.951 ********
2026-03-26 05:29:44.558480 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:44.558524 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:44.558555 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164147 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-16-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164246 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164272 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164279 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164321 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2e41bcf9', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2e41bcf9-ad92-42bb-b49e-289ca95def9f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164331 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164341 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:29:55.164353 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:29:55.164362 | orchestrator |
2026-03-26 05:29:55.164370 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 05:29:55.164379 | orchestrator | Thursday 26 March 2026 05:29:44 +0000 (0:00:01.563) 0:27:08.210 ********
2026-03-26 05:29:55.164386 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:29:55.164394 | orchestrator |
2026-03-26 05:29:55.164401 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 05:29:55.164409 | orchestrator | Thursday 26 March 2026 05:29:46 +0000 (0:00:01.115) 0:27:09.774 ********
2026-03-26 05:29:55.164416 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:29:55.164423 | orchestrator |
2026-03-26 05:29:55.164430 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:29:55.164437 | orchestrator | Thursday 26 March 2026 05:29:47 +0000 (0:00:01.501) 0:27:10.889 ********
2026-03-26 05:29:55.164444 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:29:55.164451 | orchestrator |
2026-03-26 05:29:55.164458 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:29:55.164465 | orchestrator | Thursday 26 March 2026 05:29:48 +0000 (0:00:01.148) 0:27:12.390 ********
2026-03-26 05:29:55.164472 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:29:55.164479 | orchestrator |
2026-03-26 05:29:55.164486 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:29:55.164493 | orchestrator | Thursday 26 March 2026 05:29:49 +0000 (0:00:01.148) 0:27:13.539 ********
2026-03-26 05:29:55.164500 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:29:55.164507 | orchestrator |
2026-03-26 05:29:55.164514 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:29:55.164521 | orchestrator | Thursday 26 March 2026 05:29:51 +0000 (0:00:01.275) 0:27:14.815 ********
2026-03-26 05:29:55.164528 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:29:55.164535 | orchestrator |
2026-03-26 05:29:55.164542 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 05:29:55.164549 | orchestrator | Thursday 26 March 2026 05:29:52 +0000 (0:00:01.166) 0:27:15.982 ********
2026-03-26 05:29:55.164556 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-26 05:29:55.164563 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:29:55.164570 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-26 05:29:55.164577 | orchestrator |
2026-03-26 05:29:55.164584 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 05:29:55.164591 | orchestrator | Thursday 26 March 2026 05:29:53 +0000 (0:00:01.664) 0:27:17.646 ********
2026-03-26 05:29:55.164598 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-26 05:29:55.164605 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:29:55.164612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-26 05:29:55.164619 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:29:55.164626 | orchestrator |
2026-03-26 05:29:55.164637 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 05:30:31.426198 | orchestrator | Thursday 26 March 2026 05:29:55 +0000 (0:00:01.160) 0:27:18.807 ********
2026-03-26 05:30:31.426320 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.426338 | orchestrator |
2026-03-26 05:30:31.426350 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 05:30:31.426361 | orchestrator | Thursday 26 March 2026 05:29:56 +0000 (0:00:01.127) 0:27:19.935 ********
2026-03-26 05:30:31.426373 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:30:31.426385 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:30:31.426397 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:30:31.426408 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:30:31.426447 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:30:31.426459 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:30:31.426470 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:30:31.426481 | orchestrator |
2026-03-26 05:30:31.426491 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 05:30:31.426502 | orchestrator | Thursday 26 March 2026 05:29:58 +0000 (0:00:02.117) 0:27:22.052 ********
2026-03-26 05:30:31.426513 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:30:31.426524 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-26 05:30:31.426534 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:30:31.426545 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:30:31.426555 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:30:31.426566 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:30:31.426590 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:30:31.426601 | orchestrator |
2026-03-26 05:30:31.426611 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 05:30:31.426622 | orchestrator | Thursday 26 March 2026 05:30:00 +0000 (0:00:02.294) 0:27:24.347 ********
2026-03-26 05:30:31.426632 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-03-26 05:30:31.426644 | orchestrator |
2026-03-26 05:30:31.426655 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 05:30:31.426666 | orchestrator | Thursday 26 March 2026 05:30:01 +0000 (0:00:01.124) 0:27:25.471 ********
2026-03-26 05:30:31.426677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-03-26 05:30:31.426689 | orchestrator |
2026-03-26 05:30:31.426702 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 05:30:31.426714 | orchestrator | Thursday 26 March 2026 05:30:03 +0000 (0:00:01.317) 0:27:26.789 ********
2026-03-26 05:30:31.426726 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.426738 | orchestrator |
2026-03-26 05:30:31.426750 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 05:30:31.426761 | orchestrator | Thursday 26 March 2026 05:30:04 +0000 (0:00:01.521) 0:27:28.310 ********
2026-03-26 05:30:31.426774 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.426786 | orchestrator |
2026-03-26 05:30:31.426830 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 05:30:31.426850 | orchestrator | Thursday 26 March 2026 05:30:05 +0000 (0:00:01.166) 0:27:29.477 ********
2026-03-26 05:30:31.426868 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.426885 | orchestrator |
2026-03-26 05:30:31.426903 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 05:30:31.426921 | orchestrator | Thursday 26 March 2026 05:30:06 +0000 (0:00:01.083) 0:27:30.561 ********
2026-03-26 05:30:31.426937 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.426956 | orchestrator |
2026-03-26 05:30:31.426975 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 05:30:31.426994 | orchestrator | Thursday 26 March 2026 05:30:07 +0000 (0:00:01.080) 0:27:31.641 ********
2026-03-26 05:30:31.427012 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.427027 | orchestrator |
2026-03-26 05:30:31.427038 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 05:30:31.427048 | orchestrator | Thursday 26 March 2026 05:30:09 +0000 (0:00:01.550) 0:27:33.192 ********
2026-03-26 05:30:31.427059 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427081 | orchestrator |
2026-03-26 05:30:31.427092 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 05:30:31.427103 | orchestrator | Thursday 26 March 2026 05:30:10 +0000 (0:00:01.089) 0:27:34.281 ********
2026-03-26 05:30:31.427113 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427124 | orchestrator |
2026-03-26 05:30:31.427135 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 05:30:31.427146 | orchestrator | Thursday 26 March 2026 05:30:11 +0000 (0:00:01.108) 0:27:35.390 ********
2026-03-26 05:30:31.427156 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.427167 | orchestrator |
2026-03-26 05:30:31.427178 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 05:30:31.427188 | orchestrator | Thursday 26 March 2026 05:30:13 +0000 (0:00:01.582) 0:27:36.972 ********
2026-03-26 05:30:31.427199 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.427210 | orchestrator |
2026-03-26 05:30:31.427221 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 05:30:31.427251 | orchestrator | Thursday 26 March 2026 05:30:14 +0000 (0:00:01.528) 0:27:38.500 ********
2026-03-26 05:30:31.427262 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427273 | orchestrator |
2026-03-26 05:30:31.427283 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:30:31.427294 | orchestrator | Thursday 26 March 2026 05:30:15 +0000 (0:00:00.756) 0:27:39.257 ********
2026-03-26 05:30:31.427304 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.427315 | orchestrator |
2026-03-26 05:30:31.427326 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:30:31.427336 | orchestrator | Thursday 26 March 2026 05:30:16 +0000 (0:00:00.834) 0:27:40.091 ********
2026-03-26 05:30:31.427347 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427357 | orchestrator |
2026-03-26 05:30:31.427368 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:30:31.427378 | orchestrator | Thursday 26 March 2026 05:30:17 +0000 (0:00:00.774) 0:27:40.865 ********
2026-03-26 05:30:31.427389 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427399 | orchestrator |
2026-03-26 05:30:31.427410 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:30:31.427420 | orchestrator | Thursday 26 March 2026 05:30:17 +0000 (0:00:00.756) 0:27:41.622 ********
2026-03-26 05:30:31.427431 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427442 | orchestrator |
2026-03-26 05:30:31.427452 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:30:31.427462 | orchestrator | Thursday 26 March 2026 05:30:18 +0000 (0:00:00.810) 0:27:42.432 ********
2026-03-26 05:30:31.427473 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427483 | orchestrator |
2026-03-26 05:30:31.427502 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 05:30:31.427520 | orchestrator | Thursday 26 March 2026 05:30:19 +0000 (0:00:00.836) 0:27:43.269 ********
2026-03-26 05:30:31.427539 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427557 | orchestrator |
2026-03-26 05:30:31.427573 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 05:30:31.427589 | orchestrator | Thursday 26 March 2026 05:30:20 +0000 (0:00:00.763) 0:27:44.032 ********
2026-03-26 05:30:31.427607 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.427626 | orchestrator |
2026-03-26 05:30:31.427645 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 05:30:31.427672 | orchestrator | Thursday 26 March 2026 05:30:21 +0000 (0:00:00.823) 0:27:44.856 ********
2026-03-26 05:30:31.427684 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.427695 | orchestrator |
2026-03-26 05:30:31.427706 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 05:30:31.427716 | orchestrator | Thursday 26 March 2026 05:30:21 +0000 (0:00:00.788) 0:27:45.645 ********
2026-03-26 05:30:31.427727 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:30:31.427746 | orchestrator |
2026-03-26 05:30:31.427757 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 05:30:31.427767 | orchestrator | Thursday 26 March 2026 05:30:22 +0000 (0:00:00.828) 0:27:46.474 ********
2026-03-26 05:30:31.427778 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427789 | orchestrator |
2026-03-26 05:30:31.427827 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 05:30:31.427839 | orchestrator | Thursday 26 March 2026 05:30:23 +0000 (0:00:00.808) 0:27:47.282 ********
2026-03-26 05:30:31.427849 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427860 | orchestrator |
2026-03-26 05:30:31.427871 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 05:30:31.427881 | orchestrator | Thursday 26 March 2026 05:30:24 +0000 (0:00:00.780) 0:27:48.063 ********
2026-03-26 05:30:31.427892 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427903 | orchestrator |
2026-03-26 05:30:31.427913 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 05:30:31.427924 | orchestrator | Thursday 26 March 2026 05:30:25 +0000 (0:00:00.780) 0:27:48.843 ********
2026-03-26 05:30:31.427935 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427945 | orchestrator |
2026-03-26 05:30:31.427956 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 05:30:31.427966 | orchestrator | Thursday 26 March 2026 05:30:25 +0000 (0:00:00.789) 0:27:49.633 ********
2026-03-26 05:30:31.427977 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.427988 | orchestrator |
2026-03-26 05:30:31.427998 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 05:30:31.428009 | orchestrator | Thursday 26 March 2026 05:30:26 +0000 (0:00:00.777) 0:27:50.410 ********
2026-03-26 05:30:31.428019 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.428030 | orchestrator |
2026-03-26 05:30:31.428041 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 05:30:31.428051 | orchestrator | Thursday 26 March 2026 05:30:27 +0000 (0:00:00.784) 0:27:51.195 ********
2026-03-26 05:30:31.428062 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.428072 | orchestrator |
2026-03-26 05:30:31.428083 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 05:30:31.428094 | orchestrator | Thursday 26 March 2026 05:30:28 +0000 (0:00:00.825) 0:27:52.020 ********
2026-03-26 05:30:31.428105 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.428115 | orchestrator |
2026-03-26 05:30:31.428126 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 05:30:31.428136 | orchestrator | Thursday 26 March 2026 05:30:29 +0000 (0:00:00.765) 0:27:52.786 ********
2026-03-26 05:30:31.428147 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.428158 | orchestrator |
2026-03-26 05:30:31.428168 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 05:30:31.428179 | orchestrator | Thursday 26 March 2026 05:30:29 +0000 (0:00:00.747) 0:27:53.533 ********
2026-03-26 05:30:31.428189 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.428200 | orchestrator |
2026-03-26 05:30:31.428211 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 05:30:31.428221 | orchestrator | Thursday 26 March 2026 05:30:30 +0000 (0:00:00.772) 0:27:54.306 ********
2026-03-26 05:30:31.428232 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:30:31.428243 | orchestrator |
2026-03-26 05:30:31.428262 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 05:31:16.383369 | orchestrator | Thursday 26 March 2026 05:30:31 +0000 (0:00:00.765) 0:27:55.072 ********
2026-03-26 05:31:16.383491 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.383509 | orchestrator |
2026-03-26 05:31:16.383577 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 05:31:16.383591 | orchestrator | Thursday 26 March 2026 05:30:32 +0000 (0:00:00.815) 0:27:55.887 ********
2026-03-26 05:31:16.383627 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:31:16.383667 | orchestrator |
2026-03-26 05:31:16.383688 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 05:31:16.383700 | orchestrator | Thursday 26 March 2026 05:30:33 +0000 (0:00:01.567) 0:27:57.455 ********
2026-03-26 05:31:16.383711 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:31:16.383721 | orchestrator |
2026-03-26 05:31:16.383732 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 05:31:16.383743 | orchestrator | Thursday 26 March 2026 05:30:35 +0000 (0:00:01.940) 0:27:59.395 ********
2026-03-26 05:31:16.383753 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-03-26 05:31:16.383765 | orchestrator |
2026-03-26 05:31:16.383776 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 05:31:16.383786 | orchestrator | Thursday 26 March 2026 05:30:36 +0000 (0:00:01.097) 0:28:00.493 ********
2026-03-26 05:31:16.383797 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.383808 | orchestrator |
2026-03-26 05:31:16.383818 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 05:31:16.383829 | orchestrator | Thursday 26 March 2026 05:30:38 +0000 (0:00:01.191) 0:28:01.685 ********
2026-03-26 05:31:16.383839 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.383850 | orchestrator |
2026-03-26 05:31:16.383861 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 05:31:16.383872 | orchestrator | Thursday 26 March 2026 05:30:39 +0000 (0:00:01.120) 0:28:02.806 ********
2026-03-26 05:31:16.383883 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 05:31:16.383896 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 05:31:16.383910 | orchestrator |
2026-03-26 05:31:16.383936 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 05:31:16.383949 | orchestrator | Thursday 26 March 2026 05:30:40 +0000 (0:00:01.829) 0:28:04.636 ********
2026-03-26 05:31:16.383962 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:31:16.383974 | orchestrator |
2026-03-26 05:31:16.383986 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-26 05:31:16.383998 | orchestrator | Thursday 26 March 2026 05:30:42 +0000 (0:00:01.533) 0:28:06.169 ********
2026-03-26 05:31:16.384009 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384019 | orchestrator |
2026-03-26 05:31:16.384030 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-26 05:31:16.384041 | orchestrator | Thursday 26 March 2026 05:30:43 +0000 (0:00:01.186) 0:28:07.356 ********
2026-03-26 05:31:16.384051 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384062 | orchestrator |
2026-03-26 05:31:16.384072 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 05:31:16.384083 | orchestrator | Thursday 26 March 2026 05:30:44 +0000 (0:00:00.764) 0:28:08.120 ********
2026-03-26 05:31:16.384093 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384104 | orchestrator |
2026-03-26 05:31:16.384114 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 05:31:16.384125 | orchestrator | Thursday 26 March 2026 05:30:45 +0000 (0:00:00.795) 0:28:08.916 ********
2026-03-26 05:31:16.384136 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-03-26 05:31:16.384146 | orchestrator |
2026-03-26 05:31:16.384157 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-26 05:31:16.384167 | orchestrator | Thursday 26 March 2026 05:30:46 +0000 (0:00:01.216) 0:28:10.133 ********
2026-03-26 05:31:16.384178 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:31:16.384188 | orchestrator |
2026-03-26 05:31:16.384199 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-26 05:31:16.384210 | orchestrator | Thursday 26 March 2026 05:30:48 +0000 (0:00:01.843) 0:28:11.977 ********
2026-03-26 05:31:16.384230 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-26 05:31:16.384249 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-26 05:31:16.384267 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-26 05:31:16.384286 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384303 | orchestrator |
2026-03-26 05:31:16.384320 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-26 05:31:16.384337 | orchestrator | Thursday 26 March 2026 05:30:49 +0000 (0:00:01.130) 0:28:13.107 ********
2026-03-26 05:31:16.384354 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384374 | orchestrator |
2026-03-26 05:31:16.384392 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-26 05:31:16.384411 | orchestrator | Thursday 26 March 2026 05:30:50 +0000 (0:00:01.157) 0:28:14.265 ********
2026-03-26 05:31:16.384429 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384443 | orchestrator |
2026-03-26 05:31:16.384454 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-26 05:31:16.384464 | orchestrator | Thursday 26 March 2026 05:30:51 +0000 (0:00:01.189) 0:28:15.455 ********
2026-03-26 05:31:16.384475 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384485 | orchestrator |
2026-03-26 05:31:16.384496 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-26 05:31:16.384506 | orchestrator | Thursday 26 March 2026 05:30:52 +0000 (0:00:01.150) 0:28:16.606 ********
2026-03-26 05:31:16.384517 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384527 | orchestrator |
2026-03-26 05:31:16.384558 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-26 05:31:16.384570 | orchestrator | Thursday 26 March 2026 05:30:54 +0000 (0:00:01.164) 0:28:17.770 ********
2026-03-26 05:31:16.384580 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384591 | orchestrator |
2026-03-26 05:31:16.384602 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 05:31:16.384612 | orchestrator | Thursday 26 March 2026 05:30:54 +0000 (0:00:00.777) 0:28:18.548 ********
2026-03-26 05:31:16.384623 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:31:16.384633 | orchestrator |
2026-03-26 05:31:16.384689 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 05:31:16.384701 | orchestrator | Thursday 26 March 2026 05:30:57 +0000 (0:00:02.187) 0:28:20.735 ********
2026-03-26 05:31:16.384712 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:31:16.384723 | orchestrator |
2026-03-26 05:31:16.384733 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 05:31:16.384744 | orchestrator | Thursday 26 March 2026 05:30:57 +0000 (0:00:00.812) 0:28:21.548 ********
2026-03-26 05:31:16.384754 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-03-26 05:31:16.384765 | orchestrator |
2026-03-26 05:31:16.384775 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 05:31:16.384786 | orchestrator | Thursday 26 March 2026 05:30:59 +0000 (0:00:01.166) 0:28:22.715 ********
2026-03-26 05:31:16.384796 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384807 | orchestrator |
2026-03-26 05:31:16.384818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 05:31:16.384828 | orchestrator | Thursday 26 March 2026 05:31:00 +0000 (0:00:01.135) 0:28:23.850 ********
2026-03-26 05:31:16.384839 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384849 | orchestrator |
2026-03-26 05:31:16.384860 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 05:31:16.384870 | orchestrator | Thursday 26 March 2026 05:31:01 +0000 (0:00:01.143) 0:28:24.994 ********
2026-03-26 05:31:16.384881 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384892 | orchestrator |
2026-03-26 05:31:16.384902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 05:31:16.384920 | orchestrator | Thursday 26 March 2026 05:31:02 +0000 (0:00:01.145) 0:28:26.140 ********
2026-03-26 05:31:16.384940 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384951 | orchestrator |
2026-03-26 05:31:16.384961 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 05:31:16.384972 | orchestrator | Thursday 26 March 2026 05:31:03 +0000 (0:00:01.120) 0:28:27.261 ********
2026-03-26 05:31:16.384983 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.384993 | orchestrator |
2026-03-26 05:31:16.385013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 05:31:16.385033 | orchestrator | Thursday 26 March 2026 05:31:04 +0000 (0:00:01.113) 0:28:28.374 ********
2026-03-26 05:31:16.385052 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.385071 | orchestrator |
2026-03-26 05:31:16.385090 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 05:31:16.385110 | orchestrator | Thursday 26 March 2026 05:31:05 +0000 (0:00:01.153) 0:28:29.528 ********
2026-03-26 05:31:16.385128 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.385148 | orchestrator |
2026-03-26 05:31:16.385168 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 05:31:16.385189 | orchestrator | Thursday 26 March 2026 05:31:06 +0000 (0:00:01.126) 0:28:30.654 ********
2026-03-26 05:31:16.385209 | orchestrator | skipping: [testbed-node-1]
2026-03-26 05:31:16.385229 | orchestrator |
2026-03-26 05:31:16.385242 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 05:31:16.385253 | orchestrator | Thursday 26 March 2026 05:31:08 +0000 (0:00:01.164) 0:28:31.819 ********
2026-03-26 05:31:16.385264 | orchestrator | ok: [testbed-node-1]
2026-03-26 05:31:16.385274 | orchestrator |
2026-03-26 05:31:16.385285 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 05:31:16.385295 | orchestrator | Thursday 26 March 2026 05:31:09 +0000 (0:00:00.941) 0:28:32.761 ********
2026-03-26 05:31:16.385305 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-03-26 05:31:16.385316 | orchestrator |
2026-03-26 05:31:16.385327 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 05:31:16.385338 | orchestrator | Thursday 26 March 2026 05:31:10 +0000 (0:00:01.112) 0:28:33.874 ********
2026-03-26 05:31:16.385348 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-03-26 05:31:16.385359 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-26 05:31:16.385370 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-26 05:31:16.385380 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-26 05:31:16.385391 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-26 05:31:16.385401 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-26 05:31:16.385414 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-26 05:31:16.385433 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-26 05:31:16.385451 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 05:31:16.385469 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 05:31:16.385486 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 05:31:16.385504 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 05:31:16.385522 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 05:31:16.385541 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 05:31:16.385559 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-03-26 05:31:16.385578 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-03-26 05:31:16.385596 | orchestrator |
2026-03-26 05:31:16.385624 | orchestrator |
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 05:31:57.684143 | orchestrator | Thursday 26 March 2026 05:31:16 +0000 (0:00:06.138) 0:28:40.013 ******** 2026-03-26 05:31:57.684281 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684299 | orchestrator | 2026-03-26 05:31:57.684312 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 05:31:57.684324 | orchestrator | Thursday 26 March 2026 05:31:17 +0000 (0:00:00.765) 0:28:40.779 ******** 2026-03-26 05:31:57.684335 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684346 | orchestrator | 2026-03-26 05:31:57.684357 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-26 05:31:57.684370 | orchestrator | Thursday 26 March 2026 05:31:17 +0000 (0:00:00.768) 0:28:41.547 ******** 2026-03-26 05:31:57.684381 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684392 | orchestrator | 2026-03-26 05:31:57.684403 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-26 05:31:57.684414 | orchestrator | Thursday 26 March 2026 05:31:18 +0000 (0:00:00.801) 0:28:42.348 ******** 2026-03-26 05:31:57.684425 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684436 | orchestrator | 2026-03-26 05:31:57.684447 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-26 05:31:57.684458 | orchestrator | Thursday 26 March 2026 05:31:19 +0000 (0:00:00.784) 0:28:43.133 ******** 2026-03-26 05:31:57.684469 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684531 | orchestrator | 2026-03-26 05:31:57.684543 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-26 05:31:57.684554 | orchestrator | Thursday 26 March 2026 05:31:20 +0000 (0:00:00.771) 0:28:43.905 ******** 2026-03-26 
05:31:57.684565 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684575 | orchestrator | 2026-03-26 05:31:57.684586 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-26 05:31:57.684597 | orchestrator | Thursday 26 March 2026 05:31:21 +0000 (0:00:00.787) 0:28:44.693 ******** 2026-03-26 05:31:57.684608 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684619 | orchestrator | 2026-03-26 05:31:57.684629 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-26 05:31:57.684656 | orchestrator | Thursday 26 March 2026 05:31:21 +0000 (0:00:00.750) 0:28:45.443 ******** 2026-03-26 05:31:57.684667 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684678 | orchestrator | 2026-03-26 05:31:57.684690 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-26 05:31:57.684703 | orchestrator | Thursday 26 March 2026 05:31:22 +0000 (0:00:00.821) 0:28:46.265 ******** 2026-03-26 05:31:57.684715 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684727 | orchestrator | 2026-03-26 05:31:57.684739 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-26 05:31:57.684751 | orchestrator | Thursday 26 March 2026 05:31:23 +0000 (0:00:00.844) 0:28:47.110 ******** 2026-03-26 05:31:57.684763 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684775 | orchestrator | 2026-03-26 05:31:57.684788 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-26 05:31:57.684801 | orchestrator | Thursday 26 March 2026 05:31:24 +0000 (0:00:00.797) 0:28:47.907 ******** 2026-03-26 05:31:57.684813 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684825 | orchestrator | 2026-03-26 
05:31:57.684838 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-26 05:31:57.684850 | orchestrator | Thursday 26 March 2026 05:31:25 +0000 (0:00:00.772) 0:28:48.680 ******** 2026-03-26 05:31:57.684862 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684875 | orchestrator | 2026-03-26 05:31:57.684887 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-26 05:31:57.684899 | orchestrator | Thursday 26 March 2026 05:31:25 +0000 (0:00:00.766) 0:28:49.447 ******** 2026-03-26 05:31:57.684909 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684920 | orchestrator | 2026-03-26 05:31:57.684930 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-26 05:31:57.684950 | orchestrator | Thursday 26 March 2026 05:31:26 +0000 (0:00:00.900) 0:28:50.347 ******** 2026-03-26 05:31:57.684961 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.684971 | orchestrator | 2026-03-26 05:31:57.684982 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 05:31:57.684993 | orchestrator | Thursday 26 March 2026 05:31:27 +0000 (0:00:00.764) 0:28:51.112 ******** 2026-03-26 05:31:57.685003 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685013 | orchestrator | 2026-03-26 05:31:57.685024 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 05:31:57.685034 | orchestrator | Thursday 26 March 2026 05:31:28 +0000 (0:00:00.869) 0:28:51.981 ******** 2026-03-26 05:31:57.685045 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685055 | orchestrator | 2026-03-26 05:31:57.685066 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 05:31:57.685077 | orchestrator | Thursday 26 March 2026 05:31:29 +0000 (0:00:00.761) 
0:28:52.743 ******** 2026-03-26 05:31:57.685087 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685098 | orchestrator | 2026-03-26 05:31:57.685109 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:31:57.685120 | orchestrator | Thursday 26 March 2026 05:31:29 +0000 (0:00:00.772) 0:28:53.515 ******** 2026-03-26 05:31:57.685131 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685142 | orchestrator | 2026-03-26 05:31:57.685152 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:31:57.685163 | orchestrator | Thursday 26 March 2026 05:31:30 +0000 (0:00:00.821) 0:28:54.337 ******** 2026-03-26 05:31:57.685173 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685184 | orchestrator | 2026-03-26 05:31:57.685195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:31:57.685205 | orchestrator | Thursday 26 March 2026 05:31:31 +0000 (0:00:00.844) 0:28:55.181 ******** 2026-03-26 05:31:57.685216 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685226 | orchestrator | 2026-03-26 05:31:57.685255 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:31:57.685266 | orchestrator | Thursday 26 March 2026 05:31:32 +0000 (0:00:00.826) 0:28:56.008 ******** 2026-03-26 05:31:57.685277 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685287 | orchestrator | 2026-03-26 05:31:57.685298 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:31:57.685308 | orchestrator | Thursday 26 March 2026 05:31:33 +0000 (0:00:00.795) 0:28:56.804 ******** 2026-03-26 05:31:57.685319 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-26 05:31:57.685329 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-26 05:31:57.685340 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-26 05:31:57.685350 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685361 | orchestrator | 2026-03-26 05:31:57.685379 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:31:57.685396 | orchestrator | Thursday 26 March 2026 05:31:34 +0000 (0:00:01.387) 0:28:58.192 ******** 2026-03-26 05:31:57.685407 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-26 05:31:57.685417 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-26 05:31:57.685428 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-26 05:31:57.685439 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685449 | orchestrator | 2026-03-26 05:31:57.685460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:31:57.685470 | orchestrator | Thursday 26 March 2026 05:31:36 +0000 (0:00:01.467) 0:28:59.659 ******** 2026-03-26 05:31:57.685524 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-03-26 05:31:57.685544 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-03-26 05:31:57.685562 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-03-26 05:31:57.685593 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685611 | orchestrator | 2026-03-26 05:31:57.685628 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:31:57.685646 | orchestrator | Thursday 26 March 2026 05:31:37 +0000 (0:00:01.071) 0:29:00.731 ******** 2026-03-26 05:31:57.685657 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685737 | orchestrator | 2026-03-26 05:31:57.685753 | orchestrator | TASK [ceph-facts : 
Set_fact rgw_instances] ************************************* 2026-03-26 05:31:57.685763 | orchestrator | Thursday 26 March 2026 05:31:37 +0000 (0:00:00.773) 0:29:01.504 ******** 2026-03-26 05:31:57.685775 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-26 05:31:57.685785 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.685796 | orchestrator | 2026-03-26 05:31:57.685807 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 05:31:57.685817 | orchestrator | Thursday 26 March 2026 05:31:38 +0000 (0:00:00.894) 0:29:02.399 ******** 2026-03-26 05:31:57.685828 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:31:57.685840 | orchestrator | 2026-03-26 05:31:57.685850 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-26 05:31:57.685861 | orchestrator | Thursday 26 March 2026 05:31:40 +0000 (0:00:01.430) 0:29:03.829 ******** 2026-03-26 05:31:57.685872 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:31:57.685883 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-26 05:31:57.685894 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:31:57.685905 | orchestrator | 2026-03-26 05:31:57.685915 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-26 05:31:57.685927 | orchestrator | Thursday 26 March 2026 05:31:41 +0000 (0:00:01.332) 0:29:05.162 ******** 2026-03-26 05:31:57.685937 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-03-26 05:31:57.685948 | orchestrator | 2026-03-26 05:31:57.685959 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-26 05:31:57.685969 | orchestrator | Thursday 26 March 2026 05:31:42 +0000 (0:00:01.152) 0:29:06.314 ******** 
2026-03-26 05:31:57.685980 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:31:57.685991 | orchestrator | 2026-03-26 05:31:57.686001 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-26 05:31:57.686074 | orchestrator | Thursday 26 March 2026 05:31:44 +0000 (0:00:01.495) 0:29:07.809 ******** 2026-03-26 05:31:57.686087 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:31:57.686098 | orchestrator | 2026-03-26 05:31:57.686109 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-26 05:31:57.686119 | orchestrator | Thursday 26 March 2026 05:31:45 +0000 (0:00:01.158) 0:29:08.968 ******** 2026-03-26 05:31:57.686130 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 05:31:57.686141 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 05:31:57.686151 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 05:31:57.686162 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-03-26 05:31:57.686173 | orchestrator | 2026-03-26 05:31:57.686183 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-26 05:31:57.686194 | orchestrator | Thursday 26 March 2026 05:31:53 +0000 (0:00:07.848) 0:29:16.817 ******** 2026-03-26 05:31:57.686205 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:31:57.686215 | orchestrator | 2026-03-26 05:31:57.686226 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-26 05:31:57.686237 | orchestrator | Thursday 26 March 2026 05:31:54 +0000 (0:00:01.260) 0:29:18.078 ******** 2026-03-26 05:31:57.686247 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-26 05:31:57.686258 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-26 
05:31:57.686278 | orchestrator | 2026-03-26 05:31:57.686300 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-26 05:32:44.242297 | orchestrator | Thursday 26 March 2026 05:31:57 +0000 (0:00:03.250) 0:29:21.328 ******** 2026-03-26 05:32:44.242475 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-26 05:32:44.242491 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-03-26 05:32:44.242505 | orchestrator | 2026-03-26 05:32:44.242525 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-26 05:32:44.242543 | orchestrator | Thursday 26 March 2026 05:31:59 +0000 (0:00:02.080) 0:29:23.409 ******** 2026-03-26 05:32:44.242554 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:32:44.242565 | orchestrator | 2026-03-26 05:32:44.242576 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-26 05:32:44.242587 | orchestrator | Thursday 26 March 2026 05:32:01 +0000 (0:00:01.496) 0:29:24.905 ******** 2026-03-26 05:32:44.242598 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:32:44.242609 | orchestrator | 2026-03-26 05:32:44.242620 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-26 05:32:44.242631 | orchestrator | Thursday 26 March 2026 05:32:02 +0000 (0:00:00.755) 0:29:25.660 ******** 2026-03-26 05:32:44.242642 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:32:44.242653 | orchestrator | 2026-03-26 05:32:44.242664 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-26 05:32:44.242674 | orchestrator | Thursday 26 March 2026 05:32:02 +0000 (0:00:00.758) 0:29:26.418 ******** 2026-03-26 05:32:44.242685 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-03-26 05:32:44.242697 | orchestrator | 2026-03-26 05:32:44.242707 | orchestrator | 
TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-26 05:32:44.242718 | orchestrator | Thursday 26 March 2026 05:32:03 +0000 (0:00:01.114) 0:29:27.533 ******** 2026-03-26 05:32:44.242729 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:32:44.242740 | orchestrator | 2026-03-26 05:32:44.242751 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-26 05:32:44.242762 | orchestrator | Thursday 26 March 2026 05:32:04 +0000 (0:00:01.122) 0:29:28.655 ******** 2026-03-26 05:32:44.242772 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:32:44.242783 | orchestrator | 2026-03-26 05:32:44.242810 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-26 05:32:44.242821 | orchestrator | Thursday 26 March 2026 05:32:06 +0000 (0:00:01.116) 0:29:29.772 ******** 2026-03-26 05:32:44.242833 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-03-26 05:32:44.242845 | orchestrator | 2026-03-26 05:32:44.242858 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-26 05:32:44.242870 | orchestrator | Thursday 26 March 2026 05:32:07 +0000 (0:00:01.096) 0:29:30.868 ******** 2026-03-26 05:32:44.242882 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:32:44.242895 | orchestrator | 2026-03-26 05:32:44.242907 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-26 05:32:44.242919 | orchestrator | Thursday 26 March 2026 05:32:09 +0000 (0:00:02.028) 0:29:32.897 ******** 2026-03-26 05:32:44.242932 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:32:44.242944 | orchestrator | 2026-03-26 05:32:44.242956 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-26 05:32:44.242969 | orchestrator | Thursday 26 March 2026 05:32:11 +0000 (0:00:01.977) 
0:29:34.874 ******** 2026-03-26 05:32:44.242981 | orchestrator | ok: [testbed-node-1] 2026-03-26 05:32:44.242993 | orchestrator | 2026-03-26 05:32:44.243006 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-26 05:32:44.243019 | orchestrator | Thursday 26 March 2026 05:32:13 +0000 (0:00:02.354) 0:29:37.229 ******** 2026-03-26 05:32:44.243031 | orchestrator | changed: [testbed-node-1] 2026-03-26 05:32:44.243044 | orchestrator | 2026-03-26 05:32:44.243055 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-26 05:32:44.243090 | orchestrator | Thursday 26 March 2026 05:32:16 +0000 (0:00:03.301) 0:29:40.531 ******** 2026-03-26 05:32:44.243103 | orchestrator | skipping: [testbed-node-1] 2026-03-26 05:32:44.243116 | orchestrator | 2026-03-26 05:32:44.243128 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-03-26 05:32:44.243141 | orchestrator | 2026-03-26 05:32:44.243153 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-03-26 05:32:44.243165 | orchestrator | Thursday 26 March 2026 05:32:17 +0000 (0:00:01.014) 0:29:41.546 ******** 2026-03-26 05:32:44.243178 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:32:44.243190 | orchestrator | 2026-03-26 05:32:44.243203 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-03-26 05:32:44.243216 | orchestrator | Thursday 26 March 2026 05:32:20 +0000 (0:00:02.570) 0:29:44.116 ******** 2026-03-26 05:32:44.243228 | orchestrator | changed: [testbed-node-2] 2026-03-26 05:32:44.243241 | orchestrator | 2026-03-26 05:32:44.243252 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:32:44.243263 | orchestrator | Thursday 26 March 2026 05:32:22 +0000 (0:00:02.265) 0:29:46.381 ******** 2026-03-26 05:32:44.243274 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-03-26 05:32:44.243285 | orchestrator | 2026-03-26 05:32:44.243295 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:32:44.243329 | orchestrator | Thursday 26 March 2026 05:32:23 +0000 (0:00:01.153) 0:29:47.534 ******** 2026-03-26 05:32:44.243341 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243352 | orchestrator | 2026-03-26 05:32:44.243363 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:32:44.243373 | orchestrator | Thursday 26 March 2026 05:32:25 +0000 (0:00:01.508) 0:29:49.042 ******** 2026-03-26 05:32:44.243384 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243395 | orchestrator | 2026-03-26 05:32:44.243406 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:32:44.243417 | orchestrator | Thursday 26 March 2026 05:32:26 +0000 (0:00:01.102) 0:29:50.145 ******** 2026-03-26 05:32:44.243428 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243438 | orchestrator | 2026-03-26 05:32:44.243449 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:32:44.243476 | orchestrator | Thursday 26 March 2026 05:32:27 +0000 (0:00:01.484) 0:29:51.630 ******** 2026-03-26 05:32:44.243488 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243499 | orchestrator | 2026-03-26 05:32:44.243510 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:32:44.243521 | orchestrator | Thursday 26 March 2026 05:32:29 +0000 (0:00:01.167) 0:29:52.798 ******** 2026-03-26 05:32:44.243532 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243542 | orchestrator | 2026-03-26 05:32:44.243553 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] 
********************* 2026-03-26 05:32:44.243564 | orchestrator | Thursday 26 March 2026 05:32:30 +0000 (0:00:01.131) 0:29:53.930 ******** 2026-03-26 05:32:44.243574 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243585 | orchestrator | 2026-03-26 05:32:44.243596 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:32:44.243607 | orchestrator | Thursday 26 March 2026 05:32:31 +0000 (0:00:01.195) 0:29:55.125 ******** 2026-03-26 05:32:44.243618 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:32:44.243628 | orchestrator | 2026-03-26 05:32:44.243639 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 05:32:44.243650 | orchestrator | Thursday 26 March 2026 05:32:32 +0000 (0:00:01.131) 0:29:56.256 ******** 2026-03-26 05:32:44.243660 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243671 | orchestrator | 2026-03-26 05:32:44.243682 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 05:32:44.243692 | orchestrator | Thursday 26 March 2026 05:32:33 +0000 (0:00:01.164) 0:29:57.421 ******** 2026-03-26 05:32:44.243710 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:32:44.243721 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:32:44.243732 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-26 05:32:44.243743 | orchestrator | 2026-03-26 05:32:44.243754 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:32:44.243764 | orchestrator | Thursday 26 March 2026 05:32:35 +0000 (0:00:01.704) 0:29:59.126 ******** 2026-03-26 05:32:44.243775 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:32:44.243785 | orchestrator | 2026-03-26 05:32:44.243801 | orchestrator | TASK [ceph-facts : 
Find a running mon container] ******************************* 2026-03-26 05:32:44.243812 | orchestrator | Thursday 26 March 2026 05:32:36 +0000 (0:00:01.260) 0:30:00.386 ******** 2026-03-26 05:32:44.243823 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:32:44.243834 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:32:44.243844 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-26 05:32:44.243855 | orchestrator | 2026-03-26 05:32:44.243866 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:32:44.243876 | orchestrator | Thursday 26 March 2026 05:32:39 +0000 (0:00:02.884) 0:30:03.271 ******** 2026-03-26 05:32:44.243887 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-26 05:32:44.243898 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-26 05:32:44.243909 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-26 05:32:44.243919 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:32:44.243930 | orchestrator | 2026-03-26 05:32:44.243941 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:32:44.243951 | orchestrator | Thursday 26 March 2026 05:32:41 +0000 (0:00:01.399) 0:30:04.671 ******** 2026-03-26 05:32:44.243964 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:32:44.243978 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 
'item'})
2026-03-26 05:32:44.243989 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:32:44.244000 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:32:44.244011 | orchestrator |
2026-03-26 05:32:44.244022 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-26 05:32:44.244032 | orchestrator | Thursday 26 March 2026 05:32:43 +0000 (0:00:01.993) 0:30:06.665 ********
2026-03-26 05:32:44.244046 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:32:44.244068 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:04.399940 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:04.400061 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400079 | orchestrator |
2026-03-26 05:33:04.400092 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-26 05:33:04.400104 | orchestrator | Thursday 26 March 2026 05:32:44 +0000 (0:00:01.221) 0:30:07.887 ********
2026-03-26 05:33:04.400118 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:32:37.251347', 'end': '2026-03-26 05:32:37.306087', 'delta': '0:00:00.054740', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:33:04.400150 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:32:37.856920', 'end': '2026-03-26 05:32:37.897986', 'delta': '0:00:00.041066', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:33:04.400163 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:32:38.441829', 'end': '2026-03-26 05:32:38.485013', 'delta': '0:00:00.043184', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:33:04.400175 | orchestrator |
2026-03-26 05:33:04.400187 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-26 05:33:04.400198 | orchestrator | Thursday 26 March 2026 05:32:45 +0000 (0:00:01.227) 0:30:09.114 ********
2026-03-26 05:33:04.400210 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:04.400221 | orchestrator |
2026-03-26 05:33:04.400232 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-26 05:33:04.400270 | orchestrator | Thursday 26 March 2026 05:32:46 +0000 (0:00:01.275) 0:30:10.389 ********
2026-03-26 05:33:04.400281 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400292 | orchestrator |
2026-03-26 05:33:04.400303 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-26 05:33:04.400314 | orchestrator | Thursday 26 March 2026 05:32:48 +0000 (0:00:01.670) 0:30:12.060 ********
2026-03-26 05:33:04.400324 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:04.400357 | orchestrator |
2026-03-26 05:33:04.400369 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 05:33:04.400380 | orchestrator | Thursday 26 March 2026 05:32:49 +0000 (0:00:01.143) 0:30:13.204 ********
2026-03-26 05:33:04.400391 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:33:04.400402 | orchestrator |
2026-03-26 05:33:04.400413 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:33:04.400424 | orchestrator | Thursday 26 March 2026 05:32:51 +0000 (0:00:01.949) 0:30:15.154 ********
2026-03-26 05:33:04.400434 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:04.400445 | orchestrator |
2026-03-26 05:33:04.400456 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 05:33:04.400467 | orchestrator | Thursday 26 March 2026 05:32:52 +0000 (0:00:01.128) 0:30:16.282 ********
2026-03-26 05:33:04.400496 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400509 | orchestrator |
2026-03-26 05:33:04.400522 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 05:33:04.400535 | orchestrator | Thursday 26 March 2026 05:32:53 +0000 (0:00:01.152) 0:30:17.435 ********
2026-03-26 05:33:04.400547 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400560 | orchestrator |
2026-03-26 05:33:04.400572 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:33:04.400584 | orchestrator | Thursday 26 March 2026 05:32:55 +0000 (0:00:01.237) 0:30:18.673 ********
2026-03-26 05:33:04.400597 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400609 | orchestrator |
2026-03-26 05:33:04.400622 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 05:33:04.400635 | orchestrator | Thursday 26 March 2026 05:32:56 +0000 (0:00:01.163) 0:30:19.836 ********
2026-03-26 05:33:04.400647 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400658 | orchestrator |
2026-03-26 05:33:04.400668 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 05:33:04.400679 | orchestrator | Thursday 26 March 2026 05:32:57 +0000 (0:00:01.154) 0:30:20.991 ********
2026-03-26 05:33:04.400689 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400700 | orchestrator |
2026-03-26 05:33:04.400711 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 05:33:04.400722 | orchestrator | Thursday 26 March 2026 05:32:58 +0000 (0:00:01.156) 0:30:22.148 ********
2026-03-26 05:33:04.400732 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400743 | orchestrator |
2026-03-26 05:33:04.400753 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 05:33:04.400764 | orchestrator | Thursday 26 March 2026 05:32:59 +0000 (0:00:01.121) 0:30:23.270 ********
2026-03-26 05:33:04.400775 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400786 | orchestrator |
2026-03-26 05:33:04.400796 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 05:33:04.400807 | orchestrator | Thursday 26 March 2026 05:33:00 +0000 (0:00:01.146) 0:30:24.416 ********
2026-03-26 05:33:04.400824 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400835 | orchestrator |
2026-03-26 05:33:04.400846 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 05:33:04.400858 | orchestrator | Thursday 26 March 2026 05:33:01 +0000 (0:00:01.159) 0:30:25.576 ********
2026-03-26 05:33:04.400868 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:04.400879 | orchestrator |
2026-03-26 05:33:04.400890 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-26 05:33:04.400900 | orchestrator | Thursday 26 March 2026 05:33:03 +0000 (0:00:01.175) 0:30:26.752 ********
2026-03-26 05:33:04.400912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:04.400933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:04.400944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:04.400956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-26 05:33:04.400968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:04.400988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:05.675221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:05.675407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7634648a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:33:05.675454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:05.675468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:33:05.675480 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:05.675493 | orchestrator |
2026-03-26 05:33:05.675505 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-26 05:33:05.675518 | orchestrator | Thursday 26 March 2026 05:33:04 +0000 (0:00:01.292) 0:30:28.045 ********
2026-03-26 05:33:05.675549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:05.675564 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:05.675581 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:05.675602 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-10-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:05.675614 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:05.675625 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:05.675636 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:05.675665 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7634648a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1', 'scsi-SQEMU_QEMU_HARDDISK_7634648a-b5a4-45bc-ac0b-8484a2642b22-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:40.678493 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:40.678614 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:33:40.678632 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.678647 | orchestrator |
2026-03-26 05:33:40.678660 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 05:33:40.678672 | orchestrator | Thursday 26 March 2026 05:33:05 +0000 (0:00:01.278) 0:30:29.323 ********
2026-03-26 05:33:40.678683 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.678695 | orchestrator |
2026-03-26 05:33:40.678706 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 05:33:40.678717 | orchestrator | Thursday 26 March 2026 05:33:07 +0000 (0:00:01.481) 0:30:30.804 ********
2026-03-26 05:33:40.678727 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.678738 | orchestrator |
2026-03-26 05:33:40.678749 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:33:40.678760 | orchestrator | Thursday 26 March 2026 05:33:08 +0000 (0:00:01.158) 0:30:31.962 ********
2026-03-26 05:33:40.678771 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.678781 | orchestrator |
2026-03-26 05:33:40.678792 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:33:40.678803 | orchestrator | Thursday 26 March 2026 05:33:09 +0000 (0:00:01.491) 0:30:33.454 ********
2026-03-26 05:33:40.678814 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.678825 | orchestrator |
2026-03-26 05:33:40.678835 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:33:40.678846 | orchestrator | Thursday 26 March 2026 05:33:10 +0000 (0:00:01.157) 0:30:34.612 ********
2026-03-26 05:33:40.678857 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.678868 | orchestrator |
2026-03-26 05:33:40.678901 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:33:40.678913 | orchestrator | Thursday 26 March 2026 05:33:12 +0000 (0:00:01.334) 0:30:35.947 ********
2026-03-26 05:33:40.678923 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.678934 | orchestrator |
2026-03-26 05:33:40.678945 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 05:33:40.678955 | orchestrator | Thursday 26 March 2026 05:33:13 +0000 (0:00:01.185) 0:30:37.132 ********
2026-03-26 05:33:40.678966 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-26 05:33:40.678977 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-03-26 05:33:40.678988 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:33:40.678998 | orchestrator |
2026-03-26 05:33:40.679024 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 05:33:40.679039 | orchestrator | Thursday 26 March 2026 05:33:15 +0000 (0:00:01.723) 0:30:38.856 ********
2026-03-26 05:33:40.679051 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-26 05:33:40.679064 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-26 05:33:40.679076 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:33:40.679088 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679100 | orchestrator |
2026-03-26 05:33:40.679112 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 05:33:40.679151 | orchestrator | Thursday 26 March 2026 05:33:16 +0000 (0:00:01.165) 0:30:40.022 ********
2026-03-26 05:33:40.679163 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679176 | orchestrator |
2026-03-26 05:33:40.679187 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 05:33:40.679200 | orchestrator | Thursday 26 March 2026 05:33:17 +0000 (0:00:01.127) 0:30:41.150 ********
2026-03-26 05:33:40.679212 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:33:40.679224 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:33:40.679237 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:33:40.679249 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:33:40.679261 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:33:40.679274 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:33:40.679303 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:33:40.679317 | orchestrator |
2026-03-26 05:33:40.679330 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 05:33:40.679342 | orchestrator | Thursday 26 March 2026 05:33:19 +0000 (0:00:02.171) 0:30:43.322 ********
2026-03-26 05:33:40.679355 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:33:40.679367 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:33:40.679379 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:33:40.679390 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:33:40.679401 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 05:33:40.679412 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:33:40.679422 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 05:33:40.679433 | orchestrator |
2026-03-26 05:33:40.679444 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 05:33:40.679454 | orchestrator | Thursday 26 March 2026 05:33:21 +0000 (0:00:02.279) 0:30:45.601 ********
2026-03-26 05:33:40.679465 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2
2026-03-26 05:33:40.679485 | orchestrator |
2026-03-26 05:33:40.679496 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 05:33:40.679507 | orchestrator | Thursday 26 March 2026 05:33:23 +0000 (0:00:01.268) 0:30:46.870 ********
2026-03-26 05:33:40.679518 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2
2026-03-26 05:33:40.679529 | orchestrator |
2026-03-26 05:33:40.679540 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 05:33:40.679551 | orchestrator | Thursday 26 March 2026 05:33:24 +0000 (0:00:01.121) 0:30:47.991 ********
2026-03-26 05:33:40.679561 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.679572 | orchestrator |
2026-03-26 05:33:40.679583 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 05:33:40.679593 | orchestrator | Thursday 26 March 2026 05:33:25 +0000 (0:00:01.538) 0:30:49.530 ********
2026-03-26 05:33:40.679604 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679615 | orchestrator |
2026-03-26 05:33:40.679625 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 05:33:40.679636 | orchestrator | Thursday 26 March 2026 05:33:27 +0000 (0:00:01.136) 0:30:50.667 ********
2026-03-26 05:33:40.679647 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679658 | orchestrator |
2026-03-26 05:33:40.679668 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 05:33:40.679679 | orchestrator | Thursday 26 March 2026 05:33:28 +0000 (0:00:01.098) 0:30:51.765 ********
2026-03-26 05:33:40.679690 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679700 | orchestrator |
2026-03-26 05:33:40.679711 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 05:33:40.679722 | orchestrator | Thursday 26 March 2026 05:33:29 +0000 (0:00:01.166) 0:30:52.932 ********
2026-03-26 05:33:40.679733 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.679743 | orchestrator |
2026-03-26 05:33:40.679754 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 05:33:40.679765 | orchestrator | Thursday 26 March 2026 05:33:30 +0000 (0:00:01.577) 0:30:54.509 ********
2026-03-26 05:33:40.679775 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679786 | orchestrator |
2026-03-26 05:33:40.679797 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 05:33:40.679808 | orchestrator | Thursday 26 March 2026 05:33:32 +0000 (0:00:01.202) 0:30:55.712 ********
2026-03-26 05:33:40.679818 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679829 | orchestrator |
2026-03-26 05:33:40.679840 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 05:33:40.679856 | orchestrator | Thursday 26 March 2026 05:33:33 +0000 (0:00:01.162) 0:30:56.875 ********
2026-03-26 05:33:40.679866 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.679877 | orchestrator |
2026-03-26 05:33:40.679888 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 05:33:40.679899 | orchestrator | Thursday 26 March 2026 05:33:34 +0000 (0:00:01.605) 0:30:58.480 ********
2026-03-26 05:33:40.679909 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.679920 | orchestrator |
2026-03-26 05:33:40.679931 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 05:33:40.679941 | orchestrator | Thursday 26 March 2026 05:33:36 +0000 (0:00:01.614) 0:31:00.095 ********
2026-03-26 05:33:40.679952 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.679962 | orchestrator |
2026-03-26 05:33:40.679973 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:33:40.679984 | orchestrator | Thursday 26 March 2026 05:33:37 +0000 (0:00:00.980) 0:31:01.076 ********
2026-03-26 05:33:40.679994 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:33:40.680005 | orchestrator |
2026-03-26 05:33:40.680016 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:33:40.680026 | orchestrator | Thursday 26 March 2026 05:33:38 +0000 (0:00:00.858) 0:31:01.934 ********
2026-03-26 05:33:40.680043 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.680054 | orchestrator |
2026-03-26 05:33:40.680064 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:33:40.680075 | orchestrator | Thursday 26 March 2026 05:33:39 +0000 (0:00:00.773) 0:31:02.707 ********
2026-03-26 05:33:40.680086 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:33:40.680096 | orchestrator |
2026-03-26 05:33:40.680107 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:33:40.680147 | orchestrator | Thursday 26 March 2026 05:33:39 +0000 (0:00:00.810) 0:31:03.518 ********
2026-03-26 05:33:40.680166 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033514 | orchestrator |
2026-03-26 05:34:23.033618 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:34:23.033633 | orchestrator | Thursday 26 March 2026 05:33:40 +0000 (0:00:00.804) 0:31:04.322 ********
2026-03-26 05:34:23.033643 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033654 | orchestrator |
2026-03-26 05:34:23.033663 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 05:34:23.033672 | orchestrator | Thursday 26 March 2026 05:33:41 +0000 (0:00:00.860) 0:31:05.183 ********
2026-03-26 05:34:23.033681 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033689 | orchestrator |
2026-03-26 05:34:23.033698 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 05:34:23.033707 | orchestrator | Thursday 26 March 2026 05:33:42 +0000 (0:00:00.803) 0:31:05.986 ********
2026-03-26 05:34:23.033716 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:34:23.033725 | orchestrator |
2026-03-26 05:34:23.033734 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 05:34:23.033743 | orchestrator | Thursday 26 March 2026 05:33:43 +0000 (0:00:00.824) 0:31:06.811 ********
2026-03-26 05:34:23.033751 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:34:23.033759 | orchestrator |
2026-03-26 05:34:23.033768 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 05:34:23.033777 | orchestrator | Thursday 26 March 2026 05:33:44 +0000 (0:00:00.851) 0:31:07.662 ********
2026-03-26 05:34:23.033785 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:34:23.033794 | orchestrator |
2026-03-26 05:34:23.033802 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 05:34:23.033811 | orchestrator | Thursday 26 March 2026 05:33:44 +0000 (0:00:00.886) 0:31:08.549 ********
2026-03-26 05:34:23.033820 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033828 | orchestrator |
2026-03-26 05:34:23.033836 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 05:34:23.033845 | orchestrator | Thursday 26 March 2026 05:33:45 +0000 (0:00:00.794) 0:31:09.343 ********
2026-03-26 05:34:23.033853 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033862 | orchestrator |
2026-03-26 05:34:23.033870 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 05:34:23.033879 | orchestrator | Thursday 26 March 2026 05:33:46 +0000 (0:00:00.869) 0:31:10.213 ********
2026-03-26 05:34:23.033888 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033896 | orchestrator |
2026-03-26 05:34:23.033905 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 05:34:23.033913 | orchestrator | Thursday 26 March 2026 05:33:47 +0000 (0:00:01.025) 0:31:11.239 ********
2026-03-26 05:34:23.033922 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033930 | orchestrator |
2026-03-26 05:34:23.033939 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 05:34:23.033947 | orchestrator | Thursday 26 March 2026 05:33:48 +0000 (0:00:00.798) 0:31:12.038 ********
2026-03-26 05:34:23.033956 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.033964 | orchestrator |
2026-03-26 05:34:23.033973 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 05:34:23.034074 | orchestrator | Thursday 26 March 2026 05:33:49 +0000 (0:00:00.777) 0:31:12.816 ********
2026-03-26 05:34:23.034088 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.034098 | orchestrator |
2026-03-26 05:34:23.034109 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 05:34:23.034118 | orchestrator | Thursday 26 March 2026 05:33:49 +0000 (0:00:00.792) 0:31:13.608 ********
2026-03-26 05:34:23.034128 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:34:23.034138 | orchestrator |
2026-03-26 05:34:23.034148 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with
ceph_stable_release] *** 2026-03-26 05:34:23.034159 | orchestrator | Thursday 26 March 2026 05:33:50 +0000 (0:00:00.835) 0:31:14.443 ******** 2026-03-26 05:34:23.034169 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034179 | orchestrator | 2026-03-26 05:34:23.034198 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 05:34:23.034208 | orchestrator | Thursday 26 March 2026 05:33:51 +0000 (0:00:00.774) 0:31:15.218 ******** 2026-03-26 05:34:23.034218 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034229 | orchestrator | 2026-03-26 05:34:23.034252 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 05:34:23.034263 | orchestrator | Thursday 26 March 2026 05:33:52 +0000 (0:00:00.815) 0:31:16.034 ******** 2026-03-26 05:34:23.034273 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034283 | orchestrator | 2026-03-26 05:34:23.034293 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-26 05:34:23.034303 | orchestrator | Thursday 26 March 2026 05:33:53 +0000 (0:00:00.779) 0:31:16.813 ******** 2026-03-26 05:34:23.034313 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034322 | orchestrator | 2026-03-26 05:34:23.034332 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-26 05:34:23.034342 | orchestrator | Thursday 26 March 2026 05:33:53 +0000 (0:00:00.810) 0:31:17.623 ******** 2026-03-26 05:34:23.034352 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034362 | orchestrator | 2026-03-26 05:34:23.034371 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-26 05:34:23.034381 | orchestrator | Thursday 26 March 2026 05:33:54 +0000 (0:00:00.791) 0:31:18.415 ******** 2026-03-26 05:34:23.034392 | orchestrator | ok: [testbed-node-2] 
2026-03-26 05:34:23.034402 | orchestrator | 2026-03-26 05:34:23.034411 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 05:34:23.034420 | orchestrator | Thursday 26 March 2026 05:33:56 +0000 (0:00:01.710) 0:31:20.125 ******** 2026-03-26 05:34:23.034429 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:34:23.034437 | orchestrator | 2026-03-26 05:34:23.034446 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 05:34:23.034454 | orchestrator | Thursday 26 March 2026 05:33:58 +0000 (0:00:02.181) 0:31:22.307 ******** 2026-03-26 05:34:23.034463 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2 2026-03-26 05:34:23.034472 | orchestrator | 2026-03-26 05:34:23.034497 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-26 05:34:23.034506 | orchestrator | Thursday 26 March 2026 05:34:00 +0000 (0:00:01.450) 0:31:23.758 ******** 2026-03-26 05:34:23.034515 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034524 | orchestrator | 2026-03-26 05:34:23.034532 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-26 05:34:23.034541 | orchestrator | Thursday 26 March 2026 05:34:01 +0000 (0:00:01.168) 0:31:24.926 ******** 2026-03-26 05:34:23.034549 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034558 | orchestrator | 2026-03-26 05:34:23.034567 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-26 05:34:23.034575 | orchestrator | Thursday 26 March 2026 05:34:02 +0000 (0:00:01.150) 0:31:26.077 ******** 2026-03-26 05:34:23.034584 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-26 05:34:23.034593 | orchestrator | ok: [testbed-node-2] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-26 05:34:23.034609 | orchestrator | 2026-03-26 05:34:23.034618 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-26 05:34:23.034626 | orchestrator | Thursday 26 March 2026 05:34:04 +0000 (0:00:01.875) 0:31:27.952 ******** 2026-03-26 05:34:23.034635 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:34:23.034644 | orchestrator | 2026-03-26 05:34:23.034652 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-26 05:34:23.034661 | orchestrator | Thursday 26 March 2026 05:34:05 +0000 (0:00:01.473) 0:31:29.426 ******** 2026-03-26 05:34:23.034670 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034678 | orchestrator | 2026-03-26 05:34:23.034687 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-26 05:34:23.034696 | orchestrator | Thursday 26 March 2026 05:34:06 +0000 (0:00:01.203) 0:31:30.629 ******** 2026-03-26 05:34:23.034704 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034713 | orchestrator | 2026-03-26 05:34:23.034722 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 05:34:23.034730 | orchestrator | Thursday 26 March 2026 05:34:07 +0000 (0:00:00.788) 0:31:31.417 ******** 2026-03-26 05:34:23.034739 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034747 | orchestrator | 2026-03-26 05:34:23.034756 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 05:34:23.034764 | orchestrator | Thursday 26 March 2026 05:34:08 +0000 (0:00:00.772) 0:31:32.190 ******** 2026-03-26 05:34:23.034773 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2 2026-03-26 05:34:23.034781 | orchestrator | 2026-03-26 05:34:23.034790 | orchestrator | TASK 
[ceph-container-common : Pulling Ceph container image] ******************** 2026-03-26 05:34:23.034799 | orchestrator | Thursday 26 March 2026 05:34:09 +0000 (0:00:01.122) 0:31:33.313 ******** 2026-03-26 05:34:23.034808 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:34:23.034816 | orchestrator | 2026-03-26 05:34:23.034825 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-26 05:34:23.034833 | orchestrator | Thursday 26 March 2026 05:34:11 +0000 (0:00:01.927) 0:31:35.241 ******** 2026-03-26 05:34:23.034842 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 05:34:23.034851 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 05:34:23.034860 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 05:34:23.034868 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034877 | orchestrator | 2026-03-26 05:34:23.034885 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-26 05:34:23.034894 | orchestrator | Thursday 26 March 2026 05:34:12 +0000 (0:00:01.240) 0:31:36.481 ******** 2026-03-26 05:34:23.034903 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034911 | orchestrator | 2026-03-26 05:34:23.034920 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-26 05:34:23.034929 | orchestrator | Thursday 26 March 2026 05:34:14 +0000 (0:00:01.219) 0:31:37.700 ******** 2026-03-26 05:34:23.034937 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034946 | orchestrator | 2026-03-26 05:34:23.034959 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-26 05:34:23.034968 | orchestrator | Thursday 26 March 2026 05:34:15 +0000 (0:00:01.326) 0:31:39.027 ******** 2026-03-26 05:34:23.034977 
| orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.034985 | orchestrator | 2026-03-26 05:34:23.035012 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-26 05:34:23.035021 | orchestrator | Thursday 26 March 2026 05:34:16 +0000 (0:00:01.216) 0:31:40.244 ******** 2026-03-26 05:34:23.035029 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.035038 | orchestrator | 2026-03-26 05:34:23.035046 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-26 05:34:23.035061 | orchestrator | Thursday 26 March 2026 05:34:17 +0000 (0:00:01.305) 0:31:41.550 ******** 2026-03-26 05:34:23.035069 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:23.035078 | orchestrator | 2026-03-26 05:34:23.035087 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 05:34:23.035095 | orchestrator | Thursday 26 March 2026 05:34:18 +0000 (0:00:00.847) 0:31:42.398 ******** 2026-03-26 05:34:23.035104 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:34:23.035112 | orchestrator | 2026-03-26 05:34:23.035121 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 05:34:23.035129 | orchestrator | Thursday 26 March 2026 05:34:21 +0000 (0:00:02.272) 0:31:44.670 ******** 2026-03-26 05:34:23.035138 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:34:23.035146 | orchestrator | 2026-03-26 05:34:23.035155 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 05:34:23.035163 | orchestrator | Thursday 26 March 2026 05:34:21 +0000 (0:00:00.828) 0:31:45.498 ******** 2026-03-26 05:34:23.035172 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2 2026-03-26 05:34:23.035180 | orchestrator | 2026-03-26 05:34:23.035195 | orchestrator | TASK [ceph-container-common : 
Set_fact ceph_release jewel] ********************* 2026-03-26 05:34:59.995270 | orchestrator | Thursday 26 March 2026 05:34:23 +0000 (0:00:01.177) 0:31:46.676 ******** 2026-03-26 05:34:59.995384 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.995400 | orchestrator | 2026-03-26 05:34:59.995413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-26 05:34:59.995424 | orchestrator | Thursday 26 March 2026 05:34:24 +0000 (0:00:01.196) 0:31:47.873 ******** 2026-03-26 05:34:59.995435 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.995446 | orchestrator | 2026-03-26 05:34:59.995456 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-26 05:34:59.995467 | orchestrator | Thursday 26 March 2026 05:34:25 +0000 (0:00:01.171) 0:31:49.044 ******** 2026-03-26 05:34:59.995478 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.995488 | orchestrator | 2026-03-26 05:34:59.995499 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-26 05:34:59.995510 | orchestrator | Thursday 26 March 2026 05:34:26 +0000 (0:00:01.132) 0:31:50.177 ******** 2026-03-26 05:34:59.995521 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.995531 | orchestrator | 2026-03-26 05:34:59.995542 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-26 05:34:59.995552 | orchestrator | Thursday 26 March 2026 05:34:27 +0000 (0:00:01.159) 0:31:51.336 ******** 2026-03-26 05:34:59.995563 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.995574 | orchestrator | 2026-03-26 05:34:59.995584 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-26 05:34:59.995595 | orchestrator | Thursday 26 March 2026 05:34:28 +0000 (0:00:01.211) 0:31:52.548 ******** 2026-03-26 05:34:59.995606 | orchestrator | 
skipping: [testbed-node-2] 2026-03-26 05:34:59.995617 | orchestrator | 2026-03-26 05:34:59.995628 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-26 05:34:59.995638 | orchestrator | Thursday 26 March 2026 05:34:30 +0000 (0:00:01.324) 0:31:53.873 ******** 2026-03-26 05:34:59.995649 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.995659 | orchestrator | 2026-03-26 05:34:59.995670 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-26 05:34:59.995681 | orchestrator | Thursday 26 March 2026 05:34:31 +0000 (0:00:01.190) 0:31:55.063 ******** 2026-03-26 05:34:59.995691 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.995702 | orchestrator | 2026-03-26 05:34:59.995713 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-26 05:34:59.995724 | orchestrator | Thursday 26 March 2026 05:34:32 +0000 (0:00:01.148) 0:31:56.212 ******** 2026-03-26 05:34:59.995734 | orchestrator | ok: [testbed-node-2] 2026-03-26 05:34:59.995746 | orchestrator | 2026-03-26 05:34:59.995779 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 05:34:59.995790 | orchestrator | Thursday 26 March 2026 05:34:33 +0000 (0:00:00.819) 0:31:57.032 ******** 2026-03-26 05:34:59.995801 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2 2026-03-26 05:34:59.995814 | orchestrator | 2026-03-26 05:34:59.995827 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-26 05:34:59.995839 | orchestrator | Thursday 26 March 2026 05:34:34 +0000 (0:00:01.127) 0:31:58.159 ******** 2026-03-26 05:34:59.995851 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph) 2026-03-26 05:34:59.995863 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-26 
05:34:59.995876 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-26 05:34:59.995916 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-26 05:34:59.995929 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-26 05:34:59.995941 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-26 05:34:59.995953 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-26 05:34:59.995965 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-26 05:34:59.995977 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 05:34:59.995989 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 05:34:59.996016 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 05:34:59.996030 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 05:34:59.996043 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 05:34:59.996056 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-26 05:34:59.996068 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph) 2026-03-26 05:34:59.996080 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph) 2026-03-26 05:34:59.996092 | orchestrator | 2026-03-26 05:34:59.996104 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 05:34:59.996116 | orchestrator | Thursday 26 March 2026 05:34:40 +0000 (0:00:06.437) 0:32:04.596 ******** 2026-03-26 05:34:59.996128 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996141 | orchestrator | 2026-03-26 05:34:59.996153 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 05:34:59.996166 | orchestrator | Thursday 26 March 2026 05:34:41 +0000 (0:00:00.780) 0:32:05.377 ******** 
2026-03-26 05:34:59.996177 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996188 | orchestrator | 2026-03-26 05:34:59.996199 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-26 05:34:59.996209 | orchestrator | Thursday 26 March 2026 05:34:42 +0000 (0:00:00.839) 0:32:06.217 ******** 2026-03-26 05:34:59.996220 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996231 | orchestrator | 2026-03-26 05:34:59.996241 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-26 05:34:59.996252 | orchestrator | Thursday 26 March 2026 05:34:43 +0000 (0:00:00.808) 0:32:07.025 ******** 2026-03-26 05:34:59.996262 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996273 | orchestrator | 2026-03-26 05:34:59.996283 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-26 05:34:59.996311 | orchestrator | Thursday 26 March 2026 05:34:44 +0000 (0:00:00.793) 0:32:07.819 ******** 2026-03-26 05:34:59.996322 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996333 | orchestrator | 2026-03-26 05:34:59.996343 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-26 05:34:59.996354 | orchestrator | Thursday 26 March 2026 05:34:44 +0000 (0:00:00.793) 0:32:08.613 ******** 2026-03-26 05:34:59.996365 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996375 | orchestrator | 2026-03-26 05:34:59.996386 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-26 05:34:59.996405 | orchestrator | Thursday 26 March 2026 05:34:45 +0000 (0:00:00.823) 0:32:09.436 ******** 2026-03-26 05:34:59.996416 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996426 | orchestrator | 2026-03-26 05:34:59.996437 | orchestrator | TASK [ceph-config : Set_fact 
num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-26 05:34:59.996448 | orchestrator | Thursday 26 March 2026 05:34:46 +0000 (0:00:00.753) 0:32:10.190 ******** 2026-03-26 05:34:59.996458 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996469 | orchestrator | 2026-03-26 05:34:59.996479 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-26 05:34:59.996490 | orchestrator | Thursday 26 March 2026 05:34:47 +0000 (0:00:00.769) 0:32:10.959 ******** 2026-03-26 05:34:59.996500 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996511 | orchestrator | 2026-03-26 05:34:59.996522 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-26 05:34:59.996532 | orchestrator | Thursday 26 March 2026 05:34:48 +0000 (0:00:00.786) 0:32:11.745 ******** 2026-03-26 05:34:59.996543 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996553 | orchestrator | 2026-03-26 05:34:59.996564 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-26 05:34:59.996574 | orchestrator | Thursday 26 March 2026 05:34:48 +0000 (0:00:00.787) 0:32:12.533 ******** 2026-03-26 05:34:59.996585 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996595 | orchestrator | 2026-03-26 05:34:59.996606 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-26 05:34:59.996617 | orchestrator | Thursday 26 March 2026 05:34:49 +0000 (0:00:00.806) 0:32:13.340 ******** 2026-03-26 05:34:59.996628 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996638 | orchestrator | 2026-03-26 05:34:59.996649 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-26 05:34:59.996659 | orchestrator | Thursday 26 March 2026 05:34:50 +0000 
(0:00:00.770) 0:32:14.110 ******** 2026-03-26 05:34:59.996670 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996680 | orchestrator | 2026-03-26 05:34:59.996691 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-26 05:34:59.996701 | orchestrator | Thursday 26 March 2026 05:34:51 +0000 (0:00:00.902) 0:32:15.013 ******** 2026-03-26 05:34:59.996711 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996722 | orchestrator | 2026-03-26 05:34:59.996732 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 05:34:59.996743 | orchestrator | Thursday 26 March 2026 05:34:52 +0000 (0:00:00.785) 0:32:15.798 ******** 2026-03-26 05:34:59.996753 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996764 | orchestrator | 2026-03-26 05:34:59.996774 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 05:34:59.996785 | orchestrator | Thursday 26 March 2026 05:34:53 +0000 (0:00:00.930) 0:32:16.729 ******** 2026-03-26 05:34:59.996795 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996805 | orchestrator | 2026-03-26 05:34:59.996816 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 05:34:59.996826 | orchestrator | Thursday 26 March 2026 05:34:53 +0000 (0:00:00.765) 0:32:17.495 ******** 2026-03-26 05:34:59.996837 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996847 | orchestrator | 2026-03-26 05:34:59.996858 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:34:59.996875 | orchestrator | Thursday 26 March 2026 05:34:54 +0000 (0:00:00.787) 0:32:18.282 ******** 2026-03-26 05:34:59.996906 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996918 | orchestrator | 
2026-03-26 05:34:59.996928 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:34:59.996939 | orchestrator | Thursday 26 March 2026 05:34:55 +0000 (0:00:00.814) 0:32:19.097 ******** 2026-03-26 05:34:59.996949 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.996967 | orchestrator | 2026-03-26 05:34:59.996977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:34:59.996988 | orchestrator | Thursday 26 March 2026 05:34:56 +0000 (0:00:00.816) 0:32:19.914 ******** 2026-03-26 05:34:59.996999 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.997009 | orchestrator | 2026-03-26 05:34:59.997020 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:34:59.997031 | orchestrator | Thursday 26 March 2026 05:34:57 +0000 (0:00:00.802) 0:32:20.716 ******** 2026-03-26 05:34:59.997041 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.997052 | orchestrator | 2026-03-26 05:34:59.997062 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:34:59.997073 | orchestrator | Thursday 26 March 2026 05:34:57 +0000 (0:00:00.778) 0:32:21.495 ******** 2026-03-26 05:34:59.997083 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:34:59.997094 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:34:59.997105 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:34:59.997115 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:34:59.997126 | orchestrator | 2026-03-26 05:34:59.997136 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:34:59.997146 | orchestrator | Thursday 26 March 2026 05:34:58 +0000 (0:00:01.097) 0:32:22.592 ******** 2026-03-26 05:34:59.997157 | 
orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:34:59.997175 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:35:57.365034 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:35:57.365130 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:35:57.365141 | orchestrator | 2026-03-26 05:35:57.365150 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:35:57.365159 | orchestrator | Thursday 26 March 2026 05:34:59 +0000 (0:00:01.047) 0:32:23.639 ******** 2026-03-26 05:35:57.365166 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-03-26 05:35:57.365174 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-03-26 05:35:57.365181 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-03-26 05:35:57.365189 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:35:57.365196 | orchestrator | 2026-03-26 05:35:57.365203 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:35:57.365211 | orchestrator | Thursday 26 March 2026 05:35:01 +0000 (0:00:01.127) 0:32:24.767 ******** 2026-03-26 05:35:57.365218 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:35:57.365225 | orchestrator | 2026-03-26 05:35:57.365232 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:35:57.365240 | orchestrator | Thursday 26 March 2026 05:35:01 +0000 (0:00:00.815) 0:32:25.582 ******** 2026-03-26 05:35:57.365247 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-26 05:35:57.365255 | orchestrator | skipping: [testbed-node-2] 2026-03-26 05:35:57.365262 | orchestrator | 2026-03-26 05:35:57.365269 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 05:35:57.365276 | orchestrator | 
Thursday 26 March 2026 05:35:02 +0000 (0:00:00.910) 0:32:26.493 ********
2026-03-26 05:35:57.365283 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.365291 | orchestrator |
2026-03-26 05:35:57.365298 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-03-26 05:35:57.365305 | orchestrator | Thursday 26 March 2026 05:35:04 +0000 (0:00:01.445) 0:32:27.938 ********
2026-03-26 05:35:57.365313 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:35:57.365320 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:35:57.365328 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-03-26 05:35:57.365335 | orchestrator |
2026-03-26 05:35:57.365359 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-03-26 05:35:57.365367 | orchestrator | Thursday 26 March 2026 05:35:05 +0000 (0:00:01.653) 0:32:29.592 ********
2026-03-26 05:35:57.365374 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-03-26 05:35:57.365381 | orchestrator |
2026-03-26 05:35:57.365388 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-03-26 05:35:57.365395 | orchestrator | Thursday 26 March 2026 05:35:07 +0000 (0:00:01.119) 0:32:30.712 ********
2026-03-26 05:35:57.365402 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.365410 | orchestrator |
2026-03-26 05:35:57.365417 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-03-26 05:35:57.365424 | orchestrator | Thursday 26 March 2026 05:35:08 +0000 (0:00:01.472) 0:32:32.185 ********
2026-03-26 05:35:57.365431 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:35:57.365438 | orchestrator |
2026-03-26 05:35:57.365445 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-03-26 05:35:57.365452 | orchestrator | Thursday 26 March 2026 05:35:09 +0000 (0:00:01.102) 0:32:33.287 ********
2026-03-26 05:35:57.365459 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 05:35:57.365466 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 05:35:57.365473 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 05:35:57.365480 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-03-26 05:35:57.365487 | orchestrator |
2026-03-26 05:35:57.365494 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-03-26 05:35:57.365514 | orchestrator | Thursday 26 March 2026 05:35:16 +0000 (0:00:07.050) 0:32:40.338 ********
2026-03-26 05:35:57.365521 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.365528 | orchestrator |
2026-03-26 05:35:57.365535 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-03-26 05:35:57.365542 | orchestrator | Thursday 26 March 2026 05:35:17 +0000 (0:00:01.191) 0:32:41.530 ********
2026-03-26 05:35:57.365549 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-26 05:35:57.365556 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-26 05:35:57.365563 | orchestrator |
2026-03-26 05:35:57.365570 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-03-26 05:35:57.365577 | orchestrator | Thursday 26 March 2026 05:35:21 +0000 (0:00:03.340) 0:32:44.870 ********
2026-03-26 05:35:57.365584 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-26 05:35:57.365593 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-03-26 05:35:57.365601 | orchestrator |
2026-03-26 05:35:57.365609 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-03-26 05:35:57.365617 | orchestrator | Thursday 26 March 2026 05:35:23 +0000 (0:00:02.059) 0:32:46.929 ********
2026-03-26 05:35:57.365626 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.365634 | orchestrator |
2026-03-26 05:35:57.365641 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-03-26 05:35:57.365649 | orchestrator | Thursday 26 March 2026 05:35:24 +0000 (0:00:01.520) 0:32:48.450 ********
2026-03-26 05:35:57.365658 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:35:57.365666 | orchestrator |
2026-03-26 05:35:57.365674 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-03-26 05:35:57.365683 | orchestrator | Thursday 26 March 2026 05:35:25 +0000 (0:00:00.803) 0:32:49.253 ********
2026-03-26 05:35:57.365691 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:35:57.365699 | orchestrator |
2026-03-26 05:35:57.365708 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-03-26 05:35:57.365729 | orchestrator | Thursday 26 March 2026 05:35:26 +0000 (0:00:00.794) 0:32:50.048 ********
2026-03-26 05:35:57.365758 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-03-26 05:35:57.365766 | orchestrator |
2026-03-26 05:35:57.365780 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-03-26 05:35:57.365788 | orchestrator | Thursday 26 March 2026 05:35:27 +0000 (0:00:01.099) 0:32:51.148 ********
2026-03-26 05:35:57.365797 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:35:57.365805 | orchestrator |
2026-03-26 05:35:57.365813 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-03-26 05:35:57.365821 | orchestrator | Thursday 26 March 2026 05:35:28 +0000 (0:00:01.222) 0:32:52.371 ********
2026-03-26 05:35:57.365828 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:35:57.365836 | orchestrator |
2026-03-26 05:35:57.365843 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-03-26 05:35:57.365850 | orchestrator | Thursday 26 March 2026 05:35:29 +0000 (0:00:01.136) 0:32:53.507 ********
2026-03-26 05:35:57.365857 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-03-26 05:35:57.365864 | orchestrator |
2026-03-26 05:35:57.365871 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-03-26 05:35:57.365878 | orchestrator | Thursday 26 March 2026 05:35:31 +0000 (0:00:01.243) 0:32:54.751 ********
2026-03-26 05:35:57.365885 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.365892 | orchestrator |
2026-03-26 05:35:57.365899 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-03-26 05:35:57.365906 | orchestrator | Thursday 26 March 2026 05:35:33 +0000 (0:00:02.052) 0:32:56.803 ********
2026-03-26 05:35:57.365913 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.365921 | orchestrator |
2026-03-26 05:35:57.365928 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-03-26 05:35:57.365935 | orchestrator | Thursday 26 March 2026 05:35:35 +0000 (0:00:01.978) 0:32:58.782 ********
2026-03-26 05:35:57.365942 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.365949 | orchestrator |
2026-03-26 05:35:57.365956 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-03-26 05:35:57.365963 | orchestrator | Thursday 26 March 2026 05:35:37 +0000 (0:00:02.506) 0:33:01.289 ********
2026-03-26 05:35:57.365970 | orchestrator | changed: [testbed-node-2]
2026-03-26 05:35:57.365977 | orchestrator |
2026-03-26 05:35:57.365984 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-03-26 05:35:57.365991 | orchestrator | Thursday 26 March 2026 05:35:41 +0000 (0:00:03.625) 0:33:04.914 ********
2026-03-26 05:35:57.365999 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-03-26 05:35:57.366006 | orchestrator |
2026-03-26 05:35:57.366057 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-03-26 05:35:57.366065 | orchestrator | Thursday 26 March 2026 05:35:42 +0000 (0:00:01.571) 0:33:06.486 ********
2026-03-26 05:35:57.366073 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:35:57.366080 | orchestrator |
2026-03-26 05:35:57.366087 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-26 05:35:57.366094 | orchestrator | Thursday 26 March 2026 05:35:45 +0000 (0:00:02.483) 0:33:08.970 ********
2026-03-26 05:35:57.366101 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:35:57.366108 | orchestrator |
2026-03-26 05:35:57.366115 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-26 05:35:57.366123 | orchestrator | Thursday 26 March 2026 05:35:47 +0000 (0:00:02.344) 0:33:11.314 ********
2026-03-26 05:35:57.366130 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.366137 | orchestrator |
2026-03-26 05:35:57.366144 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-26 05:35:57.366151 | orchestrator | Thursday 26 March 2026 05:35:48 +0000 (0:00:01.318) 0:33:12.632 ********
2026-03-26 05:35:57.366158 | orchestrator | ok: [testbed-node-2]
2026-03-26 05:35:57.366165 | orchestrator |
2026-03-26 05:35:57.366177 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-26 05:35:57.366184 | orchestrator | Thursday 26 March 2026 05:35:50 +0000 (0:00:01.156) 0:33:13.789 ********
2026-03-26 05:35:57.366197 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-26 05:35:57.366204 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-26 05:35:57.366211 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:35:57.366218 | orchestrator |
2026-03-26 05:35:57.366225 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-26 05:35:57.366232 | orchestrator | Thursday 26 March 2026 05:35:51 +0000 (0:00:01.713) 0:33:15.502 ********
2026-03-26 05:35:57.366240 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-26 05:35:57.366247 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-03-26 05:35:57.366254 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-03-26 05:35:57.366261 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-26 05:35:57.366268 | orchestrator | skipping: [testbed-node-2]
2026-03-26 05:35:57.366275 | orchestrator |
2026-03-26 05:35:57.366282 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-03-26 05:35:57.366289 | orchestrator |
2026-03-26 05:35:57.366297 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 05:35:57.366304 | orchestrator | Thursday 26 March 2026 05:35:53 +0000 (0:00:01.983) 0:33:17.486 ********
2026-03-26 05:35:57.366311 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:35:57.366318 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:35:57.366325 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:35:57.366332 | orchestrator |
2026-03-26 05:35:57.366339 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 05:35:57.366346 | orchestrator | Thursday 26 March 2026 05:35:55 +0000 (0:00:01.787) 0:33:19.274 ********
2026-03-26 05:35:57.366354 |
orchestrator | ok: [testbed-node-3]
2026-03-26 05:35:57.366361 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:35:57.366368 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:35:57.366375 | orchestrator |
2026-03-26 05:35:57.366387 | orchestrator | TASK [Get pool list] ***********************************************************
2026-03-26 05:36:03.671181 | orchestrator | Thursday 26 March 2026 05:35:57 +0000 (0:00:01.730) 0:33:21.004 ********
2026-03-26 05:36:03.671321 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:36:03.671347 | orchestrator |
2026-03-26 05:36:03.671367 | orchestrator | TASK [Get balancer module status] **********************************************
2026-03-26 05:36:03.671387 | orchestrator | Thursday 26 March 2026 05:36:00 +0000 (0:00:02.915) 0:33:23.920 ********
2026-03-26 05:36:03.671407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:36:03.671426 | orchestrator |
2026-03-26 05:36:03.671445 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] ****************************************
2026-03-26 05:36:03.671463 | orchestrator | Thursday 26 March 2026 05:36:03 +0000 (0:00:02.829) 0:33:26.749 ********
2026-03-26 05:36:03.671492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-03-26T02:57:09.996917+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20',
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:03.671596 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-03-26T02:58:21.868811+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:03.671625 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-03-26T02:58:25.656609+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 
0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '64', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:03.671680 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-03-26T02:59:25.200032+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 
32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '69', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:04.120617 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-03-26T02:59:31.330922+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 
'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:04.120830 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-03-26T02:59:37.488224+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:04.120886 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-03-26T02:59:43.721154+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '194', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:04.120929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-03-26T02:59:50.241088+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 
3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:04.120968 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-03-26T03:00:02.559162+0000', 
'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:04.901136 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 
'backups', 'create_time': '2026-03-26T03:00:50.660723+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '102', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 102, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-03-26 05:36:04.901252 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-03-26T03:01:00.012041+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '109', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 109, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}})
2026-03-26 05:36:04.901295 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-03-26T03:01:10.076355+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '204', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 204, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-03-26 05:36:04.901313 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-03-26T03:01:18.926005+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '125', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 125, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-03-26 05:37:50.395379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-03-26T03:01:27.854751+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '132', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 132, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata':
{'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}})
2026-03-26 05:37:50.395583 | orchestrator |
2026-03-26 05:37:50.395617 | orchestrator | TASK [Disable balancer] ********************************************************
2026-03-26 05:37:50.395639 | orchestrator | Thursday 26 March 2026 05:36:05 +0000 (0:00:02.826) 0:33:29.575 ********
2026-03-26 05:37:50.395658 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:37:50.395677 | orchestrator |
2026-03-26 05:37:50.395691 | orchestrator | TASK [Disable pg autoscale on pools] *******************************************
2026-03-26 05:37:50.395702 | orchestrator | Thursday 26 March 2026 05:36:08 +0000 (0:00:03.027) 0:33:32.603 ********
2026-03-26 05:37:50.395713 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-03-26 05:37:50.395726 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-03-26 05:37:50.395737 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-03-26 05:37:50.395774 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-03-26 05:37:50.395788 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-03-26 05:37:50.395799 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-03-26 05:37:50.395810 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-03-26 05:37:50.395820 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-03-26 05:37:50.395831 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-03-26 05:37:50.395842 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-03-26 05:37:50.395853 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-03-26 05:37:50.395863 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-03-26 05:37:50.395874 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-03-26 05:37:50.395885 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-03-26 05:37:50.395897 | orchestrator |
2026-03-26 05:37:50.395909 | orchestrator | TASK [Set osd flags] ***********************************************************
2026-03-26 05:37:50.395923 | orchestrator | Thursday 26 March 2026 05:37:23 +0000 (0:01:14.542) 0:34:47.145 ********
2026-03-26 05:37:50.395954 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-03-26 05:37:50.395968 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-03-26 05:37:50.395981 | orchestrator |
2026-03-26 05:37:50.395993 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-26 05:37:50.396005 | orchestrator |
2026-03-26 05:37:50.396026 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-26 05:37:50.396038 | orchestrator | Thursday 26 March 2026 05:37:29 +0000 (0:00:06.111) 0:34:53.257 ********
2026-03-26 05:37:50.396050 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-03-26 05:37:50.396062 | orchestrator |
2026-03-26 05:37:50.396075 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-26 05:37:50.396087 | orchestrator | Thursday 26 March 2026 05:37:30 +0000 (0:00:01.142) 0:34:54.399 ********
2026-03-26 05:37:50.396099 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396112 | orchestrator |
2026-03-26 05:37:50.396124 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-26 05:37:50.396136 | orchestrator | Thursday 26 March 2026 05:37:32 +0000 (0:00:01.510) 0:34:55.910 ********
2026-03-26 05:37:50.396148 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396160 | orchestrator |
2026-03-26 05:37:50.396173 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 05:37:50.396184 | orchestrator | Thursday 26 March 2026 05:37:33 +0000 (0:00:01.132) 0:34:57.043 ********
2026-03-26 05:37:50.396196 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396209 | orchestrator |
2026-03-26 05:37:50.396221 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 05:37:50.396233 | orchestrator | Thursday 26 March 2026 05:37:34 +0000 (0:00:01.490) 0:34:58.534 ********
2026-03-26 05:37:50.396245 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396256 | orchestrator |
2026-03-26 05:37:50.396266 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-26 05:37:50.396277 | orchestrator | Thursday 26 March 2026 05:37:36 +0000 (0:00:01.169) 0:34:59.703 ********
2026-03-26 05:37:50.396288 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396308 | orchestrator |
2026-03-26 05:37:50.396319 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-26 05:37:50.396329 | orchestrator | Thursday 26 March 2026 05:37:37 +0000 (0:00:01.138) 0:35:00.842 ********
2026-03-26 05:37:50.396340 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396351 | orchestrator |
2026-03-26 05:37:50.396361 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-26 05:37:50.396373 | orchestrator | Thursday 26 March 2026 05:37:38 +0000 (0:00:01.145) 0:35:01.987 ********
2026-03-26 05:37:50.396383 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:37:50.396394 | orchestrator |
2026-03-26 05:37:50.396405 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-26 05:37:50.396416 | orchestrator | Thursday 26 March 2026 05:37:39 +0000 (0:00:01.166) 0:35:03.153 ********
2026-03-26 05:37:50.396427 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396438 | orchestrator |
2026-03-26 05:37:50.396449 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-26 05:37:50.396459 | orchestrator | Thursday 26 March 2026 05:37:40 +0000 (0:00:01.136) 0:35:04.290 ********
2026-03-26 05:37:50.396470 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:37:50.396481 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:37:50.396533 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:37:50.396546 | orchestrator |
2026-03-26 05:37:50.396557 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-26 05:37:50.396568 | orchestrator | Thursday 26 March 2026 05:37:42 +0000 (0:00:01.695) 0:35:05.986 ********
2026-03-26 05:37:50.396579 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:37:50.396589 | orchestrator |
2026-03-26 05:37:50.396600 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-26 05:37:50.396611 | orchestrator | Thursday 26 March 2026 05:37:43 +0000 (0:00:01.308) 0:35:07.294 ********
2026-03-26 05:37:50.396622 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:37:50.396632 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:37:50.396643 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:37:50.396654 | orchestrator |
2026-03-26 05:37:50.396665 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-26 05:37:50.396675 | orchestrator | Thursday 26 March 2026 05:37:46 +0000 (0:00:03.319) 0:35:10.613 ********
2026-03-26 05:37:50.396686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-26 05:37:50.396697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-26 05:37:50.396708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-26 05:37:50.396718 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:37:50.396729 | orchestrator |
2026-03-26 05:37:50.396740 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-26 05:37:50.396751 | orchestrator | Thursday 26 March 2026 05:37:48 +0000 (0:00:01.439) 0:35:12.053 ********
2026-03-26 05:37:50.396763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:37:50.396784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009223 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009367 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.009385 | orchestrator |
2026-03-26 05:38:11.009398 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-26 05:38:11.009410 | orchestrator | Thursday 26 March 2026 05:37:50 +0000 (0:00:01.989) 0:35:14.043 ********
2026-03-26 05:38:11.009424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009500 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009514 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.009526 | orchestrator |
2026-03-26 05:38:11.009537 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-26 05:38:11.009548 | orchestrator | Thursday 26 March 2026 05:37:51 +0000 (0:00:01.182) 0:35:15.226 ********
2026-03-26 05:38:11.009560 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:37:44.202827', 'end': '2026-03-26 05:37:44.263426', 'delta': '0:00:00.060599', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009576 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:37:44.808591', 'end': '2026-03-26 05:37:44.855653', 'delta': '0:00:00.047062', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009605 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:37:45.729323', 'end': '2026-03-26 05:37:45.782176', 'delta': '0:00:00.052853', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:38:11.009627 | orchestrator |
2026-03-26 05:38:11.009638 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-26 05:38:11.009656 | orchestrator | Thursday 26 March 2026 05:37:52 +0000 (0:00:01.196) 0:35:16.423 ********
2026-03-26 05:38:11.009668 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:38:11.009680 | orchestrator |
2026-03-26 05:38:11.009691 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-26 05:38:11.009702 | orchestrator | Thursday 26 March 2026 05:37:54 +0000 (0:00:01.266) 0:35:18.095 ********
2026-03-26 05:38:11.009712 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.009723 | orchestrator |
2026-03-26 05:38:11.009736 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-26 05:38:11.009748 | orchestrator | Thursday 26 March 2026 05:37:55 +0000 (0:00:01.164) 0:35:19.362 ********
2026-03-26 05:38:11.009760 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:38:11.009773 | orchestrator |
2026-03-26 05:38:11.009786 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 05:38:11.009798 | orchestrator | Thursday 26 March 2026 05:37:56 +0000 (0:00:01.979) 0:35:20.527 ********
2026-03-26 05:38:11.009811 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:38:11.009824 | orchestrator |
2026-03-26 05:38:11.009837 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:38:11.009849 | orchestrator | Thursday 26 March 2026 05:37:58 +0000 (0:00:01.155) 0:35:22.507 ********
2026-03-26 05:38:11.009862 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:38:11.009874 | orchestrator |
2026-03-26 05:38:11.009886 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 05:38:11.009898 | orchestrator | Thursday 26 March 2026 05:38:00 +0000 (0:00:01.155) 0:35:23.663 ********
2026-03-26 05:38:11.009912 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.009924 | orchestrator |
2026-03-26 05:38:11.009936 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 05:38:11.009949 | orchestrator | Thursday 26 March 2026 05:38:01 +0000 (0:00:01.146) 0:35:24.809 ********
2026-03-26 05:38:11.009961 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.009974 | orchestrator |
2026-03-26 05:38:11.009987 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:38:11.010000 | orchestrator | Thursday 26 March 2026 05:38:02 +0000 (0:00:01.225) 0:35:26.034 ********
2026-03-26 05:38:11.010013 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.010088 | orchestrator |
2026-03-26 05:38:11.010101 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 05:38:11.010113 | orchestrator | Thursday 26 March 2026 05:38:03 +0000 (0:00:01.121) 0:35:27.155 ********
2026-03-26 05:38:11.010124 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.010135 | orchestrator |
2026-03-26 05:38:11.010157 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 05:38:11.010167 | orchestrator | Thursday 26 March 2026 05:38:04 +0000 (0:00:01.114) 0:35:28.270 ********
2026-03-26 05:38:11.010178 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:38:11.010189 | orchestrator |
2026-03-26 05:38:11.010200 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 05:38:11.010210 | orchestrator | Thursday 26 March 2026 05:38:05 +0000 (0:00:01.199) 0:35:29.469 ********
2026-03-26 05:38:11.010221 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.010232 | orchestrator |
2026-03-26 05:38:11.010243 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 05:38:11.010253 | orchestrator | Thursday 26 March 2026 05:38:06 +0000 (0:00:01.152) 0:35:30.622 ********
2026-03-26 05:38:11.010264 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:38:11.010283 | orchestrator |
2026-03-26 05:38:11.010294 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 05:38:11.010305 | orchestrator | Thursday 26 March 2026 05:38:08 +0000 (0:00:01.200) 0:35:31.823 ********
2026-03-26 05:38:11.010316 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:11.010327 | orchestrator |
2026-03-26 05:38:11.010337 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 05:38:11.010349 | orchestrator | Thursday 26 March 2026 05:38:09 +0000 (0:00:01.118) 0:35:32.941 ********
2026-03-26 05:38:11.010360 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:38:11.010371 | orchestrator |
2026-03-26 05:38:11.010381 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-26 05:38:11.010392 | orchestrator | Thursday 26 March 2026 05:38:10 +0000
(0:00:01.135) 0:35:34.076 ********
2026-03-26 05:38:11.010404 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:11.010424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}})
2026-03-26 05:38:11.012811 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:38:11.012872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}})
2026-03-26 05:38:11.012888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:11.012904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:11.012931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-26 05:38:11.012944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:11.012955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:38:11.012985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:11.012997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'uuids': ['aef43475-035b-4229-a7d7-1e3432ab4dcb'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS']}})
2026-03-26 05:38:11.013010 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082']}})
2026-03-26 05:38:11.013028 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:11.013057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce600cf2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:38:12.361634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:12.361697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:38:12.361707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY', 'dm-uuid-CRYPT-LUKS2-c579629dafc941d5a76c63e3abbafb40-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-03-26 05:38:12.361728 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:38:12.361736 | orchestrator |
2026-03-26 05:38:12.361742 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-03-26 05:38:12.361748 | orchestrator | Thursday 26 March 2026 05:38:12 +0000 (0:00:01.740) 0:35:35.817 ********
2026-03-26 05:38:12.361755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:38:12.361762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:12.361769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:12.361796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:12.361811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:12.361828 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:12.361840 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:12.361851 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:12.361875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.633790 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.633917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'uuids': ['aef43475-035b-4229-a7d7-1e3432ab4dcb'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.633968 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.633991 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.634114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce600cf2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.634149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.634168 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.634187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY', 'dm-uuid-CRYPT-LUKS2-c579629dafc941d5a76c63e3abbafb40-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:38:17.634207 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:38:17.634227 | orchestrator | 2026-03-26 05:38:17.634247 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:38:17.634266 | orchestrator | Thursday 26 March 2026 05:38:13 +0000 (0:00:01.360) 0:35:37.178 ******** 2026-03-26 05:38:17.634285 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:38:17.634304 | orchestrator | 2026-03-26 05:38:17.634322 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:38:17.634341 | orchestrator | Thursday 26 March 2026 05:38:15 +0000 (0:00:01.523) 0:35:38.701 ******** 2026-03-26 05:38:17.634358 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:38:17.634393 | orchestrator | 2026-03-26 05:38:17.634411 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:38:17.634461 | orchestrator | Thursday 26 March 2026 05:38:16 +0000 (0:00:01.114) 0:35:39.816 ******** 2026-03-26 05:38:17.634486 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:38:17.634503 | orchestrator | 2026-03-26 05:38:17.634520 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:38:17.634548 | orchestrator | Thursday 26 March 2026 05:38:17 +0000 (0:00:01.470) 0:35:41.286 ******** 2026-03-26 05:39:02.065762 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.065879 | orchestrator | 2026-03-26 05:39:02.065896 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:39:02.065933 | orchestrator | Thursday 26 March 2026 05:38:18 +0000 (0:00:01.100) 0:35:42.386 ******** 2026-03-26 05:39:02.065945 | orchestrator | skipping: [testbed-node-3] 2026-03-26 
05:39:02.065956 | orchestrator | 2026-03-26 05:39:02.065968 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:39:02.065979 | orchestrator | Thursday 26 March 2026 05:38:19 +0000 (0:00:01.257) 0:35:43.644 ******** 2026-03-26 05:39:02.065990 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066000 | orchestrator | 2026-03-26 05:39:02.066011 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:39:02.066087 | orchestrator | Thursday 26 March 2026 05:38:21 +0000 (0:00:01.126) 0:35:44.770 ******** 2026-03-26 05:39:02.066100 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-26 05:39:02.066111 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-26 05:39:02.066122 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-26 05:39:02.066133 | orchestrator | 2026-03-26 05:39:02.066144 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:39:02.066155 | orchestrator | Thursday 26 March 2026 05:38:23 +0000 (0:00:02.083) 0:35:46.854 ******** 2026-03-26 05:39:02.066165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 05:39:02.066176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 05:39:02.066187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 05:39:02.066197 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066208 | orchestrator | 2026-03-26 05:39:02.066219 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:39:02.066230 | orchestrator | Thursday 26 March 2026 05:38:24 +0000 (0:00:01.240) 0:35:48.095 ******** 2026-03-26 05:39:02.066240 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-26 05:39:02.066252 | 
orchestrator | 2026-03-26 05:39:02.066264 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:39:02.066278 | orchestrator | Thursday 26 March 2026 05:38:25 +0000 (0:00:01.138) 0:35:49.233 ******** 2026-03-26 05:39:02.066291 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066303 | orchestrator | 2026-03-26 05:39:02.066317 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:39:02.066329 | orchestrator | Thursday 26 March 2026 05:38:26 +0000 (0:00:01.171) 0:35:50.404 ******** 2026-03-26 05:39:02.066342 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066354 | orchestrator | 2026-03-26 05:39:02.066402 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:39:02.066422 | orchestrator | Thursday 26 March 2026 05:38:27 +0000 (0:00:01.135) 0:35:51.539 ******** 2026-03-26 05:39:02.066435 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066447 | orchestrator | 2026-03-26 05:39:02.066460 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:39:02.066473 | orchestrator | Thursday 26 March 2026 05:38:29 +0000 (0:00:01.169) 0:35:52.709 ******** 2026-03-26 05:39:02.066486 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:39:02.066498 | orchestrator | 2026-03-26 05:39:02.066511 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:39:02.066524 | orchestrator | Thursday 26 March 2026 05:38:30 +0000 (0:00:01.234) 0:35:53.944 ******** 2026-03-26 05:39:02.066537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:39:02.066550 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:39:02.066562 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-26 05:39:02.066575 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066588 | orchestrator | 2026-03-26 05:39:02.066601 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:39:02.066613 | orchestrator | Thursday 26 March 2026 05:38:31 +0000 (0:00:01.411) 0:35:55.355 ******** 2026-03-26 05:39:02.066634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:39:02.066645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:39:02.066656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 05:39:02.066667 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066677 | orchestrator | 2026-03-26 05:39:02.066688 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:39:02.066699 | orchestrator | Thursday 26 March 2026 05:38:33 +0000 (0:00:01.424) 0:35:56.779 ******** 2026-03-26 05:39:02.066710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:39:02.066721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:39:02.066732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 05:39:02.066742 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:39:02.066753 | orchestrator | 2026-03-26 05:39:02.066764 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:39:02.066774 | orchestrator | Thursday 26 March 2026 05:38:34 +0000 (0:00:01.373) 0:35:58.153 ******** 2026-03-26 05:39:02.066785 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:39:02.066796 | orchestrator | 2026-03-26 05:39:02.066806 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:39:02.066817 | orchestrator | Thursday 26 March 2026 05:38:35 +0000 
(0:00:01.143) 0:35:59.296 ******** 2026-03-26 05:39:02.066828 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-26 05:39:02.066838 | orchestrator | 2026-03-26 05:39:02.066862 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:39:02.066874 | orchestrator | Thursday 26 March 2026 05:38:36 +0000 (0:00:01.352) 0:36:00.649 ******** 2026-03-26 05:39:02.066904 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:39:02.066915 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:39:02.066926 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:39:02.066937 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-26 05:39:02.066948 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:39:02.066958 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:39:02.066969 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:39:02.066979 | orchestrator | 2026-03-26 05:39:02.066990 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:39:02.067000 | orchestrator | Thursday 26 March 2026 05:38:39 +0000 (0:00:02.178) 0:36:02.827 ******** 2026-03-26 05:39:02.067011 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:39:02.067021 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:39:02.067032 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:39:02.067043 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-26 05:39:02.067053 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:39:02.067064 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:39:02.067074 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:39:02.067085 | orchestrator | 2026-03-26 05:39:02.067096 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-26 05:39:02.067106 | orchestrator | Thursday 26 March 2026 05:38:41 +0000 (0:00:02.664) 0:36:05.492 ******** 2026-03-26 05:39:02.067117 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:39:02.067128 | orchestrator | 2026-03-26 05:39:02.067138 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-26 05:39:02.067158 | orchestrator | Thursday 26 March 2026 05:38:43 +0000 (0:00:01.525) 0:36:07.018 ******** 2026-03-26 05:39:02.067169 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:39:02.067180 | orchestrator | 2026-03-26 05:39:02.067190 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-26 05:39:02.067201 | orchestrator | Thursday 26 March 2026 05:38:44 +0000 (0:00:01.180) 0:36:08.198 ******** 2026-03-26 05:39:02.067211 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:39:02.067222 | orchestrator | 2026-03-26 05:39:02.067233 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-26 05:39:02.067243 | orchestrator | Thursday 26 March 2026 05:38:46 +0000 (0:00:01.655) 0:36:09.853 ******** 2026-03-26 05:39:02.067254 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-03-26 05:39:02.067265 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-03-26 05:39:02.067275 | orchestrator | 2026-03-26 05:39:02.067286 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************
2026-03-26 05:39:02.067297 | orchestrator | Thursday 26 March 2026 05:38:50 +0000 (0:00:04.297) 0:36:14.150 ********
2026-03-26 05:39:02.067307 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-03-26 05:39:02.067318 | orchestrator |
2026-03-26 05:39:02.067329 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 05:39:02.067339 | orchestrator | Thursday 26 March 2026 05:38:51 +0000 (0:00:01.179) 0:36:15.330 ********
2026-03-26 05:39:02.067350 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-03-26 05:39:02.067382 | orchestrator |
2026-03-26 05:39:02.067395 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 05:39:02.067406 | orchestrator | Thursday 26 March 2026 05:38:52 +0000 (0:00:01.143) 0:36:16.473 ********
2026-03-26 05:39:02.067416 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:02.067427 | orchestrator |
2026-03-26 05:39:02.067438 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 05:39:02.067448 | orchestrator | Thursday 26 March 2026 05:38:54 +0000 (0:00:01.211) 0:36:17.684 ********
2026-03-26 05:39:02.067459 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:02.067470 | orchestrator |
2026-03-26 05:39:02.067480 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 05:39:02.067491 | orchestrator | Thursday 26 March 2026 05:38:55 +0000 (0:00:01.518) 0:36:19.202 ********
2026-03-26 05:39:02.067502 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:02.067512 | orchestrator |
2026-03-26 05:39:02.067523 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 05:39:02.067534 | orchestrator | Thursday 26 March 2026 05:38:57 +0000 (0:00:01.483) 0:36:20.686 ********
2026-03-26 05:39:02.067544 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:02.067555 | orchestrator |
2026-03-26 05:39:02.067566 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 05:39:02.067576 | orchestrator | Thursday 26 March 2026 05:38:58 +0000 (0:00:01.503) 0:36:22.190 ********
2026-03-26 05:39:02.067587 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:02.067598 | orchestrator |
2026-03-26 05:39:02.067608 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 05:39:02.067619 | orchestrator | Thursday 26 March 2026 05:38:59 +0000 (0:00:01.219) 0:36:23.410 ********
2026-03-26 05:39:02.067630 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:02.067640 | orchestrator |
2026-03-26 05:39:02.067657 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 05:39:02.067668 | orchestrator | Thursday 26 March 2026 05:39:00 +0000 (0:00:01.172) 0:36:24.582 ********
2026-03-26 05:39:02.067679 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:02.067689 | orchestrator |
2026-03-26 05:39:02.067707 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 05:39:52.499560 | orchestrator | Thursday 26 March 2026 05:39:02 +0000 (0:00:01.125) 0:36:25.708 ********
2026-03-26 05:39:52.499693 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.499709 | orchestrator |
2026-03-26 05:39:52.499720 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 05:39:52.499730 | orchestrator | Thursday 26 March 2026 05:39:03 +0000 (0:00:01.514) 0:36:27.222 ********
2026-03-26 05:39:52.499740 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.499749 | orchestrator |
2026-03-26 05:39:52.499759 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 05:39:52.499768 | orchestrator | Thursday 26 March 2026 05:39:05 +0000 (0:00:01.505) 0:36:28.727 ********
2026-03-26 05:39:52.499778 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.499789 | orchestrator |
2026-03-26 05:39:52.499798 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 05:39:52.499808 | orchestrator | Thursday 26 March 2026 05:39:06 +0000 (0:00:01.160) 0:36:29.888 ********
2026-03-26 05:39:52.499817 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.499827 | orchestrator |
2026-03-26 05:39:52.499836 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 05:39:52.499846 | orchestrator | Thursday 26 March 2026 05:39:07 +0000 (0:00:01.140) 0:36:31.029 ********
2026-03-26 05:39:52.499855 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.499864 | orchestrator |
2026-03-26 05:39:52.499874 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 05:39:52.499883 | orchestrator | Thursday 26 March 2026 05:39:08 +0000 (0:00:01.157) 0:36:32.187 ********
2026-03-26 05:39:52.499892 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.499902 | orchestrator |
2026-03-26 05:39:52.499912 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 05:39:52.499921 | orchestrator | Thursday 26 March 2026 05:39:09 +0000 (0:00:01.181) 0:36:33.368 ********
2026-03-26 05:39:52.499930 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.499940 | orchestrator |
2026-03-26 05:39:52.499949 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 05:39:52.499960 | orchestrator | Thursday 26 March 2026 05:39:10 +0000 (0:00:01.148) 0:36:34.517 ********
2026-03-26 05:39:52.499969 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.499979 | orchestrator |
2026-03-26 05:39:52.499988 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 05:39:52.499997 | orchestrator | Thursday 26 March 2026 05:39:11 +0000 (0:00:01.102) 0:36:35.620 ********
2026-03-26 05:39:52.500007 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500016 | orchestrator |
2026-03-26 05:39:52.500026 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 05:39:52.500035 | orchestrator | Thursday 26 March 2026 05:39:13 +0000 (0:00:01.140) 0:36:36.760 ********
2026-03-26 05:39:52.500044 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500053 | orchestrator |
2026-03-26 05:39:52.500063 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 05:39:52.500072 | orchestrator | Thursday 26 March 2026 05:39:14 +0000 (0:00:01.137) 0:36:37.898 ********
2026-03-26 05:39:52.500082 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.500091 | orchestrator |
2026-03-26 05:39:52.500103 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 05:39:52.500114 | orchestrator | Thursday 26 March 2026 05:39:15 +0000 (0:00:01.200) 0:36:39.098 ********
2026-03-26 05:39:52.500125 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.500135 | orchestrator |
2026-03-26 05:39:52.500146 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 05:39:52.500157 | orchestrator | Thursday 26 March 2026 05:39:16 +0000 (0:00:01.166) 0:36:40.265 ********
2026-03-26 05:39:52.500167 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500178 | orchestrator |
2026-03-26 05:39:52.500188 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 05:39:52.500199 | orchestrator | Thursday 26 March 2026 05:39:17 +0000 (0:00:01.212) 0:36:41.477 ********
2026-03-26 05:39:52.500216 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500227 | orchestrator |
2026-03-26 05:39:52.500238 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 05:39:52.500248 | orchestrator | Thursday 26 March 2026 05:39:18 +0000 (0:00:01.155) 0:36:42.633 ********
2026-03-26 05:39:52.500259 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500270 | orchestrator |
2026-03-26 05:39:52.500304 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 05:39:52.500316 | orchestrator | Thursday 26 March 2026 05:39:20 +0000 (0:00:01.115) 0:36:43.749 ********
2026-03-26 05:39:52.500326 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500337 | orchestrator |
2026-03-26 05:39:52.500351 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 05:39:52.500371 | orchestrator | Thursday 26 March 2026 05:39:21 +0000 (0:00:01.125) 0:36:44.875 ********
2026-03-26 05:39:52.500389 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500401 | orchestrator |
2026-03-26 05:39:52.500412 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 05:39:52.500422 | orchestrator | Thursday 26 March 2026 05:39:22 +0000 (0:00:01.151) 0:36:46.026 ********
2026-03-26 05:39:52.500434 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500445 | orchestrator |
2026-03-26 05:39:52.500455 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 05:39:52.500465 | orchestrator | Thursday 26 March 2026 05:39:23 +0000 (0:00:01.174) 0:36:47.200 ********
2026-03-26 05:39:52.500474 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500484 | orchestrator |
2026-03-26 05:39:52.500493 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 05:39:52.500504 | orchestrator | Thursday 26 March 2026 05:39:24 +0000 (0:00:01.102) 0:36:48.303 ********
2026-03-26 05:39:52.500513 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500522 | orchestrator |
2026-03-26 05:39:52.500532 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 05:39:52.500542 | orchestrator | Thursday 26 March 2026 05:39:25 +0000 (0:00:01.124) 0:36:49.428 ********
2026-03-26 05:39:52.500567 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500577 | orchestrator |
2026-03-26 05:39:52.500587 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 05:39:52.500596 | orchestrator | Thursday 26 March 2026 05:39:26 +0000 (0:00:01.159) 0:36:50.587 ********
2026-03-26 05:39:52.500605 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500615 | orchestrator |
2026-03-26 05:39:52.500624 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 05:39:52.500634 | orchestrator | Thursday 26 March 2026 05:39:28 +0000 (0:00:01.134) 0:36:51.722 ********
2026-03-26 05:39:52.500643 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500652 | orchestrator |
2026-03-26 05:39:52.500662 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 05:39:52.500671 | orchestrator | Thursday 26 March 2026 05:39:29 +0000 (0:00:01.177) 0:36:52.899 ********
2026-03-26 05:39:52.500680 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500690 | orchestrator |
2026-03-26 05:39:52.500699 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 05:39:52.500754 | orchestrator | Thursday 26 March 2026 05:39:30 +0000 (0:00:01.104) 0:36:54.004 ********
2026-03-26 05:39:52.500765 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.500774 | orchestrator |
2026-03-26 05:39:52.500783 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 05:39:52.500792 | orchestrator | Thursday 26 March 2026 05:39:32 +0000 (0:00:01.971) 0:36:55.975 ********
2026-03-26 05:39:52.500802 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.500811 | orchestrator |
2026-03-26 05:39:52.500820 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 05:39:52.500837 | orchestrator | Thursday 26 March 2026 05:39:34 +0000 (0:00:02.286) 0:36:58.262 ********
2026-03-26 05:39:52.500847 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-03-26 05:39:52.500857 | orchestrator |
2026-03-26 05:39:52.500867 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 05:39:52.500876 | orchestrator | Thursday 26 March 2026 05:39:35 +0000 (0:00:01.153) 0:36:59.416 ********
2026-03-26 05:39:52.500885 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500894 | orchestrator |
2026-03-26 05:39:52.500904 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 05:39:52.500913 | orchestrator | Thursday 26 March 2026 05:39:36 +0000 (0:00:01.125) 0:37:00.541 ********
2026-03-26 05:39:52.500922 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.500932 | orchestrator |
2026-03-26 05:39:52.500941 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 05:39:52.500950 | orchestrator | Thursday 26 March 2026 05:39:38 +0000 (0:00:01.155) 0:37:01.697 ********
2026-03-26 05:39:52.500960 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 05:39:52.500969 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 05:39:52.500979 | orchestrator |
2026-03-26 05:39:52.500988 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 05:39:52.500997 | orchestrator | Thursday 26 March 2026 05:39:39 +0000 (0:00:01.805) 0:37:03.505 ********
2026-03-26 05:39:52.501007 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.501016 | orchestrator |
2026-03-26 05:39:52.501025 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-26 05:39:52.501034 | orchestrator | Thursday 26 March 2026 05:39:41 +0000 (0:00:01.514) 0:37:05.019 ********
2026-03-26 05:39:52.501044 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.501053 | orchestrator |
2026-03-26 05:39:52.501062 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-26 05:39:52.501072 | orchestrator | Thursday 26 March 2026 05:39:42 +0000 (0:00:01.228) 0:37:06.248 ********
2026-03-26 05:39:52.501081 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.501090 | orchestrator |
2026-03-26 05:39:52.501099 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 05:39:52.501109 | orchestrator | Thursday 26 March 2026 05:39:43 +0000 (0:00:01.235) 0:37:07.484 ********
2026-03-26 05:39:52.501118 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.501127 | orchestrator |
2026-03-26 05:39:52.501137 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 05:39:52.501146 | orchestrator | Thursday 26 March 2026 05:39:44 +0000 (0:00:01.158) 0:37:08.643 ********
2026-03-26 05:39:52.501155 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-03-26 05:39:52.501165 | orchestrator |
2026-03-26 05:39:52.501174 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-26 05:39:52.501183 | orchestrator | Thursday 26 March 2026 05:39:46 +0000 (0:00:01.111) 0:37:09.755 ********
2026-03-26 05:39:52.501192 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:39:52.501202 | orchestrator |
2026-03-26 05:39:52.501211 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-26 05:39:52.501220 | orchestrator | Thursday 26 March 2026 05:39:48 +0000 (0:00:02.889) 0:37:12.644 ********
2026-03-26 05:39:52.501230 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-26 05:39:52.501239 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-26 05:39:52.501248 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-26 05:39:52.501257 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.501267 | orchestrator |
2026-03-26 05:39:52.501276 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-26 05:39:52.501314 | orchestrator | Thursday 26 March 2026 05:39:50 +0000 (0:00:01.176) 0:37:13.821 ********
2026-03-26 05:39:52.501324 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:39:52.501334 | orchestrator |
2026-03-26 05:39:52.501343 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-26 05:39:52.501352 | orchestrator | Thursday 26 March 2026 05:39:51 +0000 (0:00:01.179) 0:37:15.001 ********
2026-03-26 05:39:52.501368 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755455 | orchestrator |
2026-03-26 05:40:42.755557 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-26 05:40:42.755569 | orchestrator | Thursday 26 March 2026 05:39:52 +0000 (0:00:01.146) 0:37:16.147 ********
2026-03-26 05:40:42.755577 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755586 | orchestrator |
2026-03-26 05:40:42.755594 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-26 05:40:42.755602 | orchestrator | Thursday 26 March 2026 05:39:53 +0000 (0:00:01.167) 0:37:17.315 ********
2026-03-26 05:40:42.755610 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755617 | orchestrator |
2026-03-26 05:40:42.755624 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-26 05:40:42.755632 | orchestrator | Thursday 26 March 2026 05:39:54 +0000 (0:00:01.178) 0:37:18.493 ********
2026-03-26 05:40:42.755639 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755646 | orchestrator |
2026-03-26 05:40:42.755653 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 05:40:42.755661 | orchestrator | Thursday 26 March 2026 05:39:56 +0000 (0:00:01.183) 0:37:19.679 ********
2026-03-26 05:40:42.755668 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:40:42.755676 | orchestrator |
2026-03-26 05:40:42.755683 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 05:40:42.755691 | orchestrator | Thursday 26 March 2026 05:39:58 +0000 (0:00:02.581) 0:37:22.261 ********
2026-03-26 05:40:42.755699 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:40:42.755706 | orchestrator |
2026-03-26 05:40:42.755713 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 05:40:42.755720 | orchestrator | Thursday 26 March 2026 05:39:59 +0000 (0:00:01.108) 0:37:23.369 ********
2026-03-26 05:40:42.755727 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-03-26 05:40:42.755734 | orchestrator |
2026-03-26 05:40:42.755742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 05:40:42.755749 | orchestrator | Thursday 26 March 2026 05:40:00 +0000 (0:00:01.098) 0:37:24.467 ********
2026-03-26 05:40:42.755756 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755763 | orchestrator |
2026-03-26 05:40:42.755770 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 05:40:42.755777 | orchestrator | Thursday 26 March 2026 05:40:01 +0000 (0:00:01.142) 0:37:25.610 ********
2026-03-26 05:40:42.755784 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755791 | orchestrator |
2026-03-26 05:40:42.755799 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 05:40:42.755806 | orchestrator | Thursday 26 March 2026 05:40:03 +0000 (0:00:01.203) 0:37:26.814 ********
2026-03-26 05:40:42.755813 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755820 | orchestrator |
2026-03-26 05:40:42.755827 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 05:40:42.755835 | orchestrator | Thursday 26 March 2026 05:40:04 +0000 (0:00:01.138) 0:37:27.952 ********
2026-03-26 05:40:42.755842 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755850 | orchestrator |
2026-03-26 05:40:42.755857 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 05:40:42.755864 | orchestrator | Thursday 26 March 2026 05:40:05 +0000 (0:00:01.184) 0:37:29.137 ********
2026-03-26 05:40:42.755871 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755878 | orchestrator |
2026-03-26 05:40:42.755885 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 05:40:42.755916 | orchestrator | Thursday 26 March 2026 05:40:06 +0000 (0:00:01.131) 0:37:30.269 ********
2026-03-26 05:40:42.755923 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755931 | orchestrator |
2026-03-26 05:40:42.755938 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 05:40:42.755945 | orchestrator | Thursday 26 March 2026 05:40:07 +0000 (0:00:01.130) 0:37:31.400 ********
2026-03-26 05:40:42.755953 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755960 | orchestrator |
2026-03-26 05:40:42.755967 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 05:40:42.755975 | orchestrator | Thursday 26 March 2026 05:40:08 +0000 (0:00:01.231) 0:37:32.632 ********
2026-03-26 05:40:42.755982 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.755989 | orchestrator |
2026-03-26 05:40:42.755996 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 05:40:42.756004 | orchestrator | Thursday 26 March 2026 05:40:10 +0000 (0:00:01.102) 0:37:33.734 ********
2026-03-26 05:40:42.756012 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:40:42.756020 | orchestrator |
2026-03-26 05:40:42.756028 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 05:40:42.756036 | orchestrator | Thursday 26 March 2026 05:40:11 +0000 (0:00:01.175) 0:37:34.909 ********
2026-03-26 05:40:42.756045 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-03-26 05:40:42.756054 | orchestrator |
2026-03-26 05:40:42.756062 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 05:40:42.756070 | orchestrator | Thursday 26 March 2026 05:40:12 +0000 (0:00:01.146) 0:37:36.056 ********
2026-03-26 05:40:42.756078 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-26 05:40:42.756087 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-26 05:40:42.756096 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-26 05:40:42.756104 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-26 05:40:42.756112 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-26 05:40:42.756133 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-26 05:40:42.756141 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-26 05:40:42.756150 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-26 05:40:42.756158 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 05:40:42.756180 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 05:40:42.756189 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 05:40:42.756197 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 05:40:42.756226 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 05:40:42.756235 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 05:40:42.756243 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-26 05:40:42.756251 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-26 05:40:42.756259 | orchestrator |
2026-03-26 05:40:42.756267 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 05:40:42.756275 | orchestrator | Thursday 26 March 2026 05:40:19 +0000 (0:00:06.674) 0:37:42.731 ********
2026-03-26 05:40:42.756283 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-26 05:40:42.756291 | orchestrator |
2026-03-26 05:40:42.756299 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-26 05:40:42.756307 | orchestrator | Thursday 26 March 2026 05:40:20 +0000 (0:00:01.506) 0:37:44.238 ********
2026-03-26 05:40:42.756315 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 05:40:42.756331 | orchestrator |
2026-03-26 05:40:42.756339 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-26 05:40:42.756347 | orchestrator | Thursday 26 March 2026 05:40:22 +0000 (0:00:01.533) 0:37:45.772 ********
2026-03-26 05:40:42.756356 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 05:40:42.756364 | orchestrator |
2026-03-26 05:40:42.756371 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 05:40:42.756378 | orchestrator | Thursday 26 March 2026 05:40:24 +0000 (0:00:02.002) 0:37:47.774 ********
2026-03-26 05:40:42.756386 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756393 | orchestrator |
2026-03-26 05:40:42.756400 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 05:40:42.756407 | orchestrator | Thursday 26 March 2026 05:40:25 +0000 (0:00:01.145) 0:37:48.920 ********
2026-03-26 05:40:42.756414 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756421 | orchestrator |
2026-03-26 05:40:42.756428 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 05:40:42.756435 | orchestrator | Thursday 26 March 2026 05:40:26 +0000 (0:00:01.112) 0:37:50.032 ********
2026-03-26 05:40:42.756442 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756449 | orchestrator |
2026-03-26 05:40:42.756456 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 05:40:42.756463 | orchestrator | Thursday 26 March 2026 05:40:27 +0000 (0:00:01.206) 0:37:51.239 ********
2026-03-26 05:40:42.756471 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756478 | orchestrator |
2026-03-26 05:40:42.756485 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 05:40:42.756492 | orchestrator | Thursday 26 March 2026 05:40:28 +0000 (0:00:01.179) 0:37:52.418 ********
2026-03-26 05:40:42.756499 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756506 | orchestrator |
2026-03-26 05:40:42.756513 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 05:40:42.756520 | orchestrator | Thursday 26 March 2026 05:40:29 +0000 (0:00:01.197) 0:37:53.578 ********
2026-03-26 05:40:42.756527 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756534 | orchestrator |
2026-03-26 05:40:42.756541 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 05:40:42.756548 | orchestrator | Thursday 26 March 2026 05:40:31 +0000 (0:00:01.197) 0:37:54.776 ********
2026-03-26 05:40:42.756555 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756562 | orchestrator |
2026-03-26 05:40:42.756570 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 05:40:42.756577 | orchestrator | Thursday 26 March 2026 05:40:32 +0000 (0:00:01.126) 0:37:55.902 ********
2026-03-26 05:40:42.756584 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756591 | orchestrator |
2026-03-26 05:40:42.756598 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 05:40:42.756605 | orchestrator | Thursday 26 March 2026 05:40:33 +0000 (0:00:01.176) 0:37:57.079 ********
2026-03-26 05:40:42.756612 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756619 | orchestrator |
2026-03-26 05:40:42.756626 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 05:40:42.756633 | orchestrator | Thursday 26 March 2026 05:40:34 +0000 (0:00:01.154) 0:37:58.233 ********
2026-03-26 05:40:42.756640 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:40:42.756647 | orchestrator |
2026-03-26 05:40:42.756654 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 05:40:42.756661 | orchestrator | Thursday 26 March 2026 05:40:35 +0000 (0:00:01.136) 0:37:59.370 ********
2026-03-26 05:40:42.756668 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:40:42.756675 | orchestrator |
2026-03-26 05:40:42.756682 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 05:40:42.756694 | orchestrator | Thursday 26 March 2026 05:40:36 +0000 (0:00:01.263) 0:38:00.634 ********
2026-03-26 05:40:42.756701 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-26 05:40:42.756708 | orchestrator |
2026-03-26 05:40:42.756719 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 05:40:42.756726 | orchestrator | Thursday 26 March 2026 05:40:41 +0000 (0:00:04.522) 0:38:05.157 ********
2026-03-26 05:40:42.756739 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 05:41:32.455715 | orchestrator |
2026-03-26 05:41:32.455833 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 05:41:32.455851 | orchestrator | Thursday 26 March 2026 05:40:42 +0000 (0:00:01.245) 0:38:06.402 ********
2026-03-26 05:41:32.455866 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-26 05:41:32.455882 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-26 05:41:32.455895 | orchestrator |
2026-03-26 05:41:32.455906 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 05:41:32.455917 | orchestrator | Thursday 26 March 2026 05:40:50 +0000 (0:00:08.009) 0:38:14.412 ********
2026-03-26 05:41:32.455929 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.455940 | orchestrator |
2026-03-26 05:41:32.455951 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 05:41:32.455962 | orchestrator | Thursday 26 March 2026 05:40:51 +0000 (0:00:01.139) 0:38:15.552 ********
2026-03-26 05:41:32.455973 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.455984 | orchestrator |
2026-03-26 05:41:32.455995 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:41:32.456008 | orchestrator | Thursday 26 March 2026 05:40:53 +0000 (0:00:01.147) 0:38:16.700 ********
2026-03-26 05:41:32.456019 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456030 | orchestrator |
2026-03-26 05:41:32.456041 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:41:32.456051 | orchestrator | Thursday 26 March 2026 05:40:54 +0000 (0:00:01.178) 0:38:17.879 ********
2026-03-26 05:41:32.456062 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456073 | orchestrator |
2026-03-26 05:41:32.456084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:41:32.456095 | orchestrator | Thursday 26 March 2026 05:40:55 +0000 (0:00:01.214) 0:38:19.093 ********
2026-03-26 05:41:32.456105 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456116 | orchestrator |
2026-03-26 05:41:32.456127 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:41:32.456138 | orchestrator | Thursday 26 March 2026 05:40:56 +0000 (0:00:01.137) 0:38:20.231 ********
2026-03-26 05:41:32.456212 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:41:32.456225 | orchestrator |
2026-03-26 05:41:32.456236 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:41:32.456247 | orchestrator | Thursday 26 March 2026 05:40:57 +0000 (0:00:01.307) 0:38:21.539 ********
2026-03-26 05:41:32.456260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 05:41:32.456273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 05:41:32.456285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 05:41:32.456321 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456334 | orchestrator |
2026-03-26 05:41:32.456347 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:41:32.456358 | orchestrator | Thursday 26 March 2026 05:40:59 +0000 (0:00:01.448) 0:38:22.988 ********
2026-03-26 05:41:32.456369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 05:41:32.456379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 05:41:32.456390 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 05:41:32.456401 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456412 | orchestrator |
2026-03-26 05:41:32.456423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:41:32.456433 | orchestrator | Thursday 26 March 2026 05:41:00 +0000 (0:00:01.419) 0:38:24.408 ********
2026-03-26 05:41:32.456444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 05:41:32.456455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 05:41:32.456465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 05:41:32.456476 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456487 | orchestrator |
2026-03-26 05:41:32.456497 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:41:32.456508 | orchestrator | Thursday 26 March 2026 05:41:02 +0000 (0:00:01.516) 0:38:25.924 ********
2026-03-26 05:41:32.456519 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:41:32.456529 | orchestrator |
2026-03-26 05:41:32.456540 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:41:32.456551 | orchestrator | Thursday 26 March 2026 05:41:03 +0000 (0:00:01.181) 0:38:27.105 ********
2026-03-26 05:41:32.456562 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 05:41:32.456572 | orchestrator |
2026-03-26 05:41:32.456583 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 05:41:32.456609 | orchestrator | Thursday 26 March 2026 05:41:05 +0000 (0:00:01.916) 0:38:29.022 ********
2026-03-26 05:41:32.456620 | orchestrator | changed: [testbed-node-3]
2026-03-26 05:41:32.456631 | orchestrator |
2026-03-26 05:41:32.456642 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-26 05:41:32.456653 | orchestrator | Thursday 26 March 2026 05:41:07 +0000 (0:00:01.719) 0:38:30.741 ********
2026-03-26 05:41:32.456664 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:41:32.456675 | orchestrator |
2026-03-26 05:41:32.456703 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-26 05:41:32.456715 | orchestrator | Thursday 26 March 2026 05:41:08 +0000 (0:00:01.130) 0:38:31.871 ********
2026-03-26 05:41:32.456726 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:41:32.456738 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:41:32.456749 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:41:32.456759 | orchestrator |
2026-03-26 05:41:32.456770 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-26 05:41:32.456781 | orchestrator | Thursday 26 March 2026 05:41:09 +0000 (0:00:01.668) 0:38:33.540 ********
2026-03-26 05:41:32.456791 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-03-26 05:41:32.456802 | orchestrator |
2026-03-26 05:41:32.456813 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-26 05:41:32.456824 | orchestrator | Thursday 26 March 2026 05:41:11 +0000 (0:00:01.485) 0:38:35.026 ********
2026-03-26 05:41:32.456834 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456845 | orchestrator |
2026-03-26 05:41:32.456856 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-26 05:41:32.456866 | orchestrator | Thursday 26 March 2026 05:41:12 +0000 (0:00:01.114) 0:38:36.141 ********
2026-03-26 05:41:32.456887 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.456898 | orchestrator |
2026-03-26 05:41:32.456909 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-26 05:41:32.456920 | orchestrator | Thursday 26 March 2026 05:41:13 +0000 (0:00:01.116) 0:38:37.258 ********
2026-03-26 05:41:32.456930 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:41:32.456941 | orchestrator |
2026-03-26 05:41:32.456952 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-26 05:41:32.456963 | orchestrator | Thursday 26 March 2026 05:41:15 +0000 (0:00:01.434) 0:38:38.692 ********
2026-03-26 05:41:32.456973 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:41:32.456984 | orchestrator |
2026-03-26 05:41:32.456995 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-26 05:41:32.457005 | orchestrator | Thursday 26 March 2026 05:41:16 +0000 (0:00:01.128) 0:38:39.820 ********
2026-03-26 05:41:32.457016 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-26 05:41:32.457027 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-26 05:41:32.457038 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-26 05:41:32.457049 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-26 05:41:32.457059 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-26 05:41:32.457070 | orchestrator |
2026-03-26 05:41:32.457081 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-26 05:41:32.457091 | orchestrator | Thursday 26 March 2026 05:41:19 +0000 (0:00:02.994) 0:38:42.815 ********
2026-03-26 05:41:32.457102 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.457113 | orchestrator |
2026-03-26 05:41:32.457123 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-26 05:41:32.457134 | orchestrator | Thursday 26 March 2026 05:41:20 +0000 (0:00:01.106) 0:38:43.922 ********
2026-03-26 05:41:32.457164 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3
2026-03-26 05:41:32.457176 | orchestrator |
2026-03-26 05:41:32.457187 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-26 05:41:32.457197 | orchestrator | Thursday 26 March 2026 05:41:21 +0000 (0:00:01.566) 0:38:45.489 ********
2026-03-26 05:41:32.457208 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-26 05:41:32.457219 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-26 05:41:32.457230 | orchestrator |
2026-03-26 05:41:32.457241 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-26 05:41:32.457252 | orchestrator | Thursday 26 March 2026 05:41:23 +0000 (0:00:01.826) 0:38:47.315 ********
2026-03-26 05:41:32.457262 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 05:41:32.457273 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-26 05:41:32.457284 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-26 05:41:32.457294 | orchestrator |
2026-03-26 05:41:32.457305 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-26 05:41:32.457316 | orchestrator | Thursday 26 March 2026 05:41:26 +0000 (0:00:03.222) 0:38:50.538 ********
2026-03-26 05:41:32.457327 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-03-26 05:41:32.457338 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-26 05:41:32.457349 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:41:32.457359 | orchestrator |
2026-03-26 05:41:32.457370 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-26 05:41:32.457381 | orchestrator | Thursday 26 March 2026 05:41:28 +0000 (0:00:01.983) 0:38:52.522 ********
2026-03-26 05:41:32.457392 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.457402 | orchestrator |
2026-03-26 05:41:32.457413 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-26 05:41:32.457437 | orchestrator | Thursday 26 March 2026 05:41:30 +0000 (0:00:01.247) 0:38:53.769 ********
2026-03-26 05:41:32.457448 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.457459 | orchestrator |
2026-03-26 05:41:32.457470 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-26 05:41:32.457481 | orchestrator | Thursday 26 March 2026 05:41:31 +0000 (0:00:01.140) 0:38:54.910 ********
2026-03-26 05:41:32.457491 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:41:32.457502 | orchestrator |
2026-03-26 05:41:32.457519 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-26 05:42:40.959419 | orchestrator | Thursday 26 March 2026 05:41:32 +0000 (0:00:01.191) 0:38:56.101 ********
2026-03-26 05:42:40.959537 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3
2026-03-26 05:42:40.959555 | orchestrator |
2026-03-26 05:42:40.959568 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-26 05:42:40.959580 | orchestrator | Thursday 26 March 2026 05:41:33 +0000 (0:00:01.504) 0:38:57.606 ********
2026-03-26 05:42:40.959592 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:42:40.959604 | orchestrator |
2026-03-26 05:42:40.959615 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-26 05:42:40.959626 | orchestrator | Thursday 26 March 2026 05:41:35 +0000 (0:00:01.458) 0:38:59.065 ********
2026-03-26 05:42:40.959636 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:42:40.959647 | orchestrator |
2026-03-26 05:42:40.959658 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-26 05:42:40.959669 | orchestrator | Thursday 26 March 2026 05:41:38 +0000 (0:00:03.593) 0:39:02.659 ********
2026-03-26 05:42:40.959680 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3
2026-03-26 05:42:40.959690 | orchestrator |
2026-03-26 05:42:40.959701 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-26 05:42:40.959712 | orchestrator | Thursday 26 March 2026 05:41:40 +0000 (0:00:01.484) 0:39:04.143 ********
2026-03-26 05:42:40.959722 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:42:40.959733 | orchestrator |
2026-03-26 05:42:40.959744 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-26 05:42:40.959754 | orchestrator | Thursday 26 March 2026 05:41:42 +0000 (0:00:01.987) 0:39:06.130 ********
2026-03-26 05:42:40.959765 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:42:40.959776 | orchestrator |
2026-03-26 05:42:40.959786 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-26 05:42:40.959797 | orchestrator | Thursday 26 March 2026 05:41:44 +0000 (0:00:01.964) 0:39:08.095 ********
2026-03-26 05:42:40.959808 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:42:40.959819 | orchestrator |
2026-03-26 05:42:40.959829 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-26 05:42:40.959840 | orchestrator | Thursday 26 March 2026 05:41:46 +0000 (0:00:02.264) 0:39:10.359 ********
2026-03-26 05:42:40.959851 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.959862 | orchestrator |
2026-03-26 05:42:40.959873 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-26 05:42:40.959884 | orchestrator | Thursday 26 March 2026 05:41:47 +0000 (0:00:01.213) 0:39:11.573 ********
2026-03-26 05:42:40.959894 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.959905 | orchestrator |
2026-03-26 05:42:40.959916 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-26 05:42:40.959926 | orchestrator | Thursday 26 March 2026 05:41:49 +0000 (0:00:01.120) 0:39:12.694 ********
2026-03-26 05:42:40.959937 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 05:42:40.959948 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-03-26 05:42:40.959958 | orchestrator |
2026-03-26 05:42:40.959969 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-26 05:42:40.959980 | orchestrator | Thursday 26 March 2026 05:41:50 +0000 (0:00:01.870) 0:39:14.564 ********
2026-03-26 05:42:40.960017 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 05:42:40.960028 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-03-26 05:42:40.960039 | orchestrator |
2026-03-26 05:42:40.960050 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-26 05:42:40.960060 | orchestrator | Thursday 26 March 2026 05:41:53 +0000 (0:00:02.955) 0:39:17.520 ********
2026-03-26 05:42:40.960093 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-26 05:42:40.960104 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-03-26 05:42:40.960114 | orchestrator |
2026-03-26 05:42:40.960125 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-26 05:42:40.960135 | orchestrator | Thursday 26 March 2026 05:41:58 +0000 (0:00:04.710) 0:39:22.230 ********
2026-03-26 05:42:40.960146 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960156 | orchestrator |
2026-03-26 05:42:40.960167 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-26 05:42:40.960177 | orchestrator | Thursday 26 March 2026 05:41:59 +0000 (0:00:01.260) 0:39:23.491 ********
2026-03-26 05:42:40.960188 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960199 | orchestrator |
2026-03-26 05:42:40.960209 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-26 05:42:40.960220 | orchestrator | Thursday 26 March 2026 05:42:01 +0000 (0:00:01.234) 0:39:24.726 ********
2026-03-26 05:42:40.960230 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960241 | orchestrator |
2026-03-26 05:42:40.960251 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-03-26 05:42:40.960262 | orchestrator | Thursday 26 March 2026 05:42:02 +0000 (0:00:01.270) 0:39:25.996 ********
2026-03-26 05:42:40.960272 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960282 | orchestrator |
2026-03-26 05:42:40.960293 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-03-26 05:42:40.960304 | orchestrator | Thursday 26 March 2026 05:42:03 +0000 (0:00:01.154) 0:39:27.150 ********
2026-03-26 05:42:40.960314 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960325 | orchestrator |
2026-03-26 05:42:40.960335 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-03-26 05:42:40.960345 | orchestrator | Thursday 26 March 2026 05:42:04 +0000 (0:00:01.204) 0:39:28.355 ********
2026-03-26 05:42:40.960371 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-03-26 05:42:40.960383 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left).
2026-03-26 05:42:40.960394 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left).
2026-03-26 05:42:40.960432 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (597 retries left).
2026-03-26 05:42:40.960453 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:42:40.960472 | orchestrator |
2026-03-26 05:42:40.960491 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-26 05:42:40.960511 | orchestrator | Thursday 26 March 2026 05:42:18 +0000 (0:00:14.164) 0:39:42.520 ********
2026-03-26 05:42:40.960528 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960545 | orchestrator |
2026-03-26 05:42:40.960562 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-26 05:42:40.960580 | orchestrator | Thursday 26 March 2026 05:42:19 +0000 (0:00:01.121) 0:39:43.641 ********
2026-03-26 05:42:40.960600 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960621 | orchestrator |
2026-03-26 05:42:40.960640 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-26 05:42:40.960660 | orchestrator | Thursday 26 March 2026 05:42:21 +0000 (0:00:01.130) 0:39:44.772 ********
2026-03-26 05:42:40.960680 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960701 | orchestrator |
2026-03-26 05:42:40.960722 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-26 05:42:40.960754 | orchestrator | Thursday 26 March 2026 05:42:22 +0000 (0:00:01.156) 0:39:45.928 ********
2026-03-26 05:42:40.960766 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960777 | orchestrator |
2026-03-26 05:42:40.960788 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-26 05:42:40.960799 | orchestrator | Thursday 26 March 2026 05:42:23 +0000 (0:00:01.144) 0:39:47.073 ********
2026-03-26 05:42:40.960809 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960820 | orchestrator |
2026-03-26 05:42:40.960831 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-26 05:42:40.960841 | orchestrator | Thursday 26 March 2026 05:42:24 +0000 (0:00:01.141) 0:39:48.215 ********
2026-03-26 05:42:40.960852 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960862 | orchestrator |
2026-03-26 05:42:40.960873 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-26 05:42:40.960883 | orchestrator | Thursday 26 March 2026 05:42:25 +0000 (0:00:01.107) 0:39:49.322 ********
2026-03-26 05:42:40.960894 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:42:40.960904 | orchestrator |
2026-03-26 05:42:40.960915 | orchestrator | PLAY [Upgrade ceph osds cluster] ***********************************************
2026-03-26 05:42:40.960925 | orchestrator |
2026-03-26 05:42:40.960936 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-26 05:42:40.960946 | orchestrator | Thursday 26 March 2026 05:42:26 +0000 (0:00:00.953) 0:39:50.276 ********
2026-03-26 05:42:40.960957 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4
2026-03-26 05:42:40.960967 | orchestrator |
2026-03-26 05:42:40.960978 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-26 05:42:40.960989 | orchestrator | Thursday 26 March 2026 05:42:27 +0000 (0:00:01.167) 0:39:51.444 ********
2026-03-26 05:42:40.960999 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961010 | orchestrator |
2026-03-26 05:42:40.961021 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-26 05:42:40.961032 | orchestrator | Thursday 26 March 2026 05:42:29 +0000 (0:00:01.481) 0:39:52.925 ********
2026-03-26 05:42:40.961042 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961053 | orchestrator |
2026-03-26 05:42:40.961116 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 05:42:40.961130 | orchestrator | Thursday 26 March 2026 05:42:30 +0000 (0:00:01.166) 0:39:54.092 ********
2026-03-26 05:42:40.961141 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961151 | orchestrator |
2026-03-26 05:42:40.961162 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 05:42:40.961172 | orchestrator | Thursday 26 March 2026 05:42:31 +0000 (0:00:01.452) 0:39:55.545 ********
2026-03-26 05:42:40.961183 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961194 | orchestrator |
2026-03-26 05:42:40.961204 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-26 05:42:40.961215 | orchestrator | Thursday 26 March 2026 05:42:33 +0000 (0:00:01.180) 0:39:56.726 ********
2026-03-26 05:42:40.961225 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961236 | orchestrator |
2026-03-26 05:42:40.961246 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-26 05:42:40.961257 | orchestrator | Thursday 26 March 2026 05:42:34 +0000 (0:00:01.124) 0:39:57.850 ********
2026-03-26 05:42:40.961267 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961278 | orchestrator |
2026-03-26 05:42:40.961289 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-26 05:42:40.961300 | orchestrator | Thursday 26 March 2026 05:42:35 +0000 (0:00:01.144) 0:39:58.995 ********
2026-03-26 05:42:40.961310 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:42:40.961321 | orchestrator |
2026-03-26 05:42:40.961331 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-26 05:42:40.961342 | orchestrator | Thursday 26 March 2026 05:42:36 +0000 (0:00:01.137) 0:40:00.132 ********
2026-03-26 05:42:40.961360 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961371 | orchestrator |
2026-03-26 05:42:40.961382 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-26 05:42:40.961392 | orchestrator | Thursday 26 March 2026 05:42:37 +0000 (0:00:01.117) 0:40:01.250 ********
2026-03-26 05:42:40.961403 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:42:40.961421 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:42:40.961432 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:42:40.961442 | orchestrator |
2026-03-26 05:42:40.961453 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-26 05:42:40.961464 | orchestrator | Thursday 26 March 2026 05:42:39 +0000 (0:00:02.101) 0:40:03.352 ********
2026-03-26 05:42:40.961474 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:42:40.961485 | orchestrator |
2026-03-26 05:42:40.961506 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-26 05:43:06.661446 | orchestrator | Thursday 26 March 2026 05:42:40 +0000 (0:00:01.253) 0:40:04.605 ********
2026-03-26 05:43:06.661554 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:43:06.661570 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:43:06.661583 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:43:06.661594 | orchestrator |
2026-03-26 05:43:06.661606 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-26 05:43:06.661617 | orchestrator | Thursday 26 March 2026 05:42:44 +0000 (0:00:03.297) 0:40:07.903 ********
2026-03-26 05:43:06.661629 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 05:43:06.661640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 05:43:06.661651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 05:43:06.661662 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.661673 | orchestrator |
2026-03-26 05:43:06.661684 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-26 05:43:06.661695 | orchestrator | Thursday 26 March 2026 05:42:46 +0000 (0:00:01.828) 0:40:09.732 ********
2026-03-26 05:43:06.661707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.661721 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.661732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.661744 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.661764 | orchestrator |
2026-03-26 05:43:06.661791 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-26 05:43:06.661814 | orchestrator | Thursday 26 March 2026 05:42:48 +0000 (0:00:01.963) 0:40:11.695 ********
2026-03-26 05:43:06.661833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.661854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.661906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.661926 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.661988 | orchestrator |
2026-03-26 05:43:06.662011 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-26 05:43:06.662145 | orchestrator | Thursday 26 March 2026 05:42:49 +0000 (0:00:01.239) 0:40:12.935 ********
2026-03-26 05:43:06.662194 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:42:41.492746', 'end': '2026-03-26 05:42:41.541679', 'delta': '0:00:00.048933', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.662230 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:42:42.438597', 'end': '2026-03-26 05:42:42.480128', 'delta': '0:00:00.041531', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.662244 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:42:43.005995', 'end': '2026-03-26 05:42:43.056235', 'delta': '0:00:00.050240', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:43:06.662255 | orchestrator |
2026-03-26 05:43:06.662266 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-26 05:43:06.662277 | orchestrator | Thursday 26 March 2026 05:42:50 +0000 (0:00:01.178) 0:40:14.113 ********
2026-03-26 05:43:06.662288 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:43:06.662299 | orchestrator |
2026-03-26 05:43:06.662310 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-26 05:43:06.662321 | orchestrator | Thursday 26 March 2026 05:42:51 +0000 (0:00:01.281) 0:40:15.394 ********
2026-03-26 05:43:06.662344 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.662355 | orchestrator |
2026-03-26 05:43:06.662366 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-26 05:43:06.662377 | orchestrator | Thursday 26 March 2026 05:42:53 +0000 (0:00:01.272) 0:40:16.667 ********
2026-03-26 05:43:06.662388 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:43:06.662399 | orchestrator |
2026-03-26 05:43:06.662409 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 05:43:06.662420 | orchestrator | Thursday 26 March 2026 05:42:54 +0000 (0:00:01.164) 0:40:17.831 ********
2026-03-26 05:43:06.662431 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:43:06.662442 | orchestrator |
2026-03-26 05:43:06.662452 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:43:06.662463 | orchestrator | Thursday 26 March 2026 05:42:56 +0000 (0:00:01.961) 0:40:19.793 ********
2026-03-26 05:43:06.662474 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:43:06.662485 | orchestrator |
2026-03-26 05:43:06.662495 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 05:43:06.662506 | orchestrator | Thursday 26 March 2026 05:42:57 +0000 (0:00:01.116) 0:40:20.910 ********
2026-03-26 05:43:06.662530 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.662541 | orchestrator |
2026-03-26 05:43:06.662552 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 05:43:06.662563 | orchestrator | Thursday 26 March 2026 05:42:58 +0000 (0:00:01.143) 0:40:22.053 ********
2026-03-26 05:43:06.662574 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.662585 | orchestrator |
2026-03-26 05:43:06.662596 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:43:06.662607 | orchestrator | Thursday 26 March 2026 05:42:59 +0000 (0:00:01.241) 0:40:23.294 ********
2026-03-26 05:43:06.662617 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.662628 | orchestrator |
2026-03-26 05:43:06.662639 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 05:43:06.662650 | orchestrator | Thursday 26 March 2026 05:43:00 +0000 (0:00:01.131) 0:40:24.426 ********
2026-03-26 05:43:06.662660 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.662671 | orchestrator |
2026-03-26 05:43:06.662682 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 05:43:06.662693 | orchestrator | Thursday 26 March 2026 05:43:01 +0000 (0:00:01.156) 0:40:25.583 ********
2026-03-26 05:43:06.662704 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:43:06.662714 | orchestrator |
2026-03-26 05:43:06.662725 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 05:43:06.662736 | orchestrator | Thursday 26 March 2026 05:43:03 +0000 (0:00:01.167) 0:40:26.751 ********
2026-03-26 05:43:06.662752 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.662763 | orchestrator |
2026-03-26 05:43:06.662774 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 05:43:06.662784 | orchestrator | Thursday 26 March 2026 05:43:04 +0000 (0:00:01.113) 0:40:27.865 ********
2026-03-26 05:43:06.662795 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:43:06.662806 | orchestrator |
2026-03-26 05:43:06.662817 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 05:43:06.662827 | orchestrator | Thursday 26 March 2026 05:43:05 +0000 (0:00:01.324) 0:40:29.190 ********
2026-03-26 05:43:06.662838 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:43:06.662849 | orchestrator |
2026-03-26 05:43:08.061230 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 05:43:08.061325 | orchestrator | Thursday 26 March 2026 05:43:06 +0000 (0:00:01.118) 0:40:30.308 ********
2026-03-26 05:43:08.061340 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:43:08.061351 | orchestrator |
2026-03-26 05:43:08.061362 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-26 05:43:08.061372 | orchestrator | Thursday 26 March 2026 05:43:07 +0000 (0:00:01.123) 0:40:31.431 ********
2026-03-26 05:43:08.061407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:43:08.061423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'uuids': ['1d39f6c5-1f6c-4630-99cd-a410ca5e45d8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt']}})
2026-03-26 05:43:08.061437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7e352b46', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:43:08.061449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e']}})
2026-03-26 05:43:08.061460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:43:08.061486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:43:08.061582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-09-00'], 'labels':
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:43:08.061606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:43:08.061617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG', 'dm-uuid-CRYPT-LUKS2-741ece0a80b8415aa2e2dcc695db5f53-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:43:08.061627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:43:08.061638 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'uuids': ['741ece0a-80b8-415a-a2e2-dcc695db5f53'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG']}})  2026-03-26 05:43:08.061649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543']}})  2026-03-26 05:43:08.061659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:43:08.061689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48d73a84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:43:09.438975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:43:09.439120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:43:09.439136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt', 'dm-uuid-CRYPT-LUKS2-1d39f6c51f6c463099cda410ca5e45d8-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:43:09.439147 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:43:09.439155 | orchestrator | 2026-03-26 05:43:09.439163 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:43:09.439172 | orchestrator | Thursday 26 March 2026 05:43:09 +0000 (0:00:01.435) 0:40:32.866 ******** 2026-03-26 05:43:09.439197 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'uuids': ['1d39f6c5-1f6c-4630-99cd-a410ca5e45d8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7e352b46', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439257 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439277 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439290 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:09.439310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG', 'dm-uuid-CRYPT-LUKS2-741ece0a80b8415aa2e2dcc695db5f53-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799654 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'uuids': ['741ece0a-80b8-415a-a2e2-dcc695db5f53'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799793 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799859 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48d73a84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799895 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt', 'dm-uuid-CRYPT-LUKS2-1d39f6c51f6c463099cda410ca5e45d8-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:43:14.799918 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:43:14.799929 | orchestrator | 2026-03-26 05:43:14.799940 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:43:14.799951 | orchestrator | Thursday 26 March 2026 05:43:10 +0000 (0:00:01.431) 0:40:34.298 ******** 2026-03-26 05:43:14.799960 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:43:14.799971 | orchestrator | 2026-03-26 05:43:14.799981 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:43:14.799990 | orchestrator | Thursday 26 March 2026 05:43:12 +0000 (0:00:01.510) 0:40:35.808 ******** 2026-03-26 05:43:14.800000 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:43:14.800009 | orchestrator | 2026-03-26 05:43:14.800018 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:43:14.800028 | orchestrator | Thursday 26 March 2026 05:43:13 +0000 (0:00:01.121) 0:40:36.930 ******** 2026-03-26 05:43:14.800065 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:43:14.800075 | orchestrator | 2026-03-26 05:43:14.800084 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:43:14.800100 | orchestrator | Thursday 26 March 2026 05:43:14 +0000 (0:00:01.521) 0:40:38.451 ******** 2026-03-26 05:43:57.024991 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:43:57.025152 | orchestrator | 2026-03-26 05:43:57.025169 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:43:57.025182 | orchestrator | Thursday 26 March 2026 05:43:15 +0000 (0:00:01.121) 0:40:39.573 ******** 2026-03-26 05:43:57.025193 | orchestrator | skipping: [testbed-node-4] 2026-03-26 
05:43:57.025204 | orchestrator | 2026-03-26 05:43:57.025215 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:43:57.025226 | orchestrator | Thursday 26 March 2026 05:43:17 +0000 (0:00:01.231) 0:40:40.805 ******** 2026-03-26 05:43:57.025237 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:43:57.025248 | orchestrator | 2026-03-26 05:43:57.025259 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:43:57.025270 | orchestrator | Thursday 26 March 2026 05:43:18 +0000 (0:00:01.132) 0:40:41.938 ******** 2026-03-26 05:43:57.025281 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-26 05:43:57.025293 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-26 05:43:57.025328 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-26 05:43:57.025340 | orchestrator | 2026-03-26 05:43:57.025351 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:43:57.025361 | orchestrator | Thursday 26 March 2026 05:43:20 +0000 (0:00:02.059) 0:40:43.997 ******** 2026-03-26 05:43:57.025372 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-26 05:43:57.025383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-26 05:43:57.025394 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-26 05:43:57.025404 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:43:57.025415 | orchestrator | 2026-03-26 05:43:57.025425 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:43:57.025436 | orchestrator | Thursday 26 March 2026 05:43:21 +0000 (0:00:01.160) 0:40:45.158 ******** 2026-03-26 05:43:57.025447 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-03-26 05:43:57.025458 | 
TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 26 March 2026 05:43:22 +0000 (0:00:01.262) 0:40:46.421 ********
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 26 March 2026 05:43:23 +0000 (0:00:01.159) 0:40:47.581 ********
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 26 March 2026 05:43:25 +0000 (0:00:01.127) 0:40:48.708 ********
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Thursday 26 March 2026 05:43:26 +0000 (0:00:01.137) 0:40:49.845 ********
ok: [testbed-node-4]

TASK [ceph-facts : Set_fact _interface] ****************************************
Thursday 26 March 2026 05:43:27 +0000 (0:00:01.247) 0:40:51.093 ********
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 26 March 2026 05:43:28 +0000 (0:00:01.412) 0:40:52.506 ********
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 26 March 2026 05:43:30 +0000 (0:00:01.419) 0:40:53.925 ********
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-4]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Thursday 26 March 2026 05:43:31 +0000 (0:00:01.426) 0:40:55.352 ********
ok: [testbed-node-4]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Thursday 26 March 2026 05:43:32 +0000 (0:00:01.248) 0:40:56.600 ********
ok: [testbed-node-4] => (item=0)

TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
Thursday 26 March 2026 05:43:34 +0000 (0:00:01.457) 0:40:58.057 ********
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
ok: [testbed-node-4] => (item=testbed-node-4)
ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
Thursday 26 March 2026 05:43:36 +0000 (0:00:02.195) 0:41:00.253 ********
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
ok: [testbed-node-4] => (item=testbed-node-4)
ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)

TASK [Get osd numbers - non container] *****************************************
Thursday 26 March 2026 05:43:39 +0000 (0:00:02.419) 0:41:02.672 ********
ok: [testbed-node-4]

TASK [Set num_osds] ************************************************************
Thursday 26 March 2026 05:43:40 +0000 (0:00:01.127) 0:41:03.800 ********
ok: [testbed-node-4]

TASK [Set_fact container_exec_cmd_osd] *****************************************
Thursday 26 March 2026 05:43:40 +0000 (0:00:00.790) 0:41:04.591 ********
ok: [testbed-node-4]

TASK [Stop ceph osd] ***********************************************************
Thursday 26 March 2026 05:43:41 +0000 (0:00:00.905) 0:41:05.496 ********
changed: [testbed-node-4] => (item=2)
changed: [testbed-node-4] => (item=5)

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 26 March 2026 05:43:45 +0000 (0:00:03.814) 0:41:09.311 ********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 26 March 2026 05:43:46 +0000 (0:00:01.107) 0:41:10.419 ********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 26 March 2026 05:43:47 +0000 (0:00:01.151) 0:41:11.570 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 26 March 2026 05:43:49 +0000 (0:00:01.134) 0:41:12.704 ********
ok: [testbed-node-4]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 26 March 2026 05:43:50 +0000 (0:00:01.506) 0:41:14.211 ********
ok: [testbed-node-4]

TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 26 March 2026 05:43:52 +0000 (0:00:01.549) 0:41:15.761 ********
ok: [testbed-node-4]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 26 March 2026 05:43:53 +0000 (0:00:01.497) 0:41:17.258 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 26 March 2026 05:43:54 +0000 (0:00:01.149) 0:41:18.408 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 26 March 2026 05:43:55 +0000 (0:00:01.151) 0:41:19.560 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 26 March 2026 05:43:57 +0000 (0:00:01.108) 0:41:20.668 ********
ok: [testbed-node-4]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 26 March 2026 05:43:58 +0000 (0:00:01.534) 0:41:22.202 ********
ok: [testbed-node-4]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 26 March 2026 05:44:00 +0000 (0:00:01.608) 0:41:23.811 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 26 March 2026 05:44:00 +0000 (0:00:00.750) 0:41:24.562 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 26 March 2026 05:44:01 +0000 (0:00:00.796) 0:41:25.359 ********
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 26 March 2026 05:44:02 +0000 (0:00:00.812) 0:41:26.171 ********
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 26 March 2026 05:44:03 +0000 (0:00:00.775) 0:41:26.947 ********
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 26 March 2026 05:44:04 +0000 (0:00:00.816) 0:41:27.763 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 26 March 2026 05:44:04 +0000 (0:00:00.812) 0:41:28.576 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 26 March 2026 05:44:05 +0000 (0:00:00.765) 0:41:29.341 ********
skipping: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 26 March 2026 05:44:06 +0000 (0:00:00.781) 0:41:30.123 ********
ok: [testbed-node-4]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 26 March 2026 05:44:07 +0000 (0:00:00.786) 0:41:30.910 ********
ok: [testbed-node-4]

TASK [ceph-common : Include configure_repository.yml] **************************
Thursday 26 March 2026 05:44:08 +0000 (0:00:00.939) 0:41:31.849 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
Thursday 26 March 2026 05:44:09 +0000 (0:00:00.812) 0:41:32.662 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
Thursday 26 March 2026 05:44:09 +0000 (0:00:00.792) 0:41:33.454 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include installs/install_on_debian.yml] ********************
Thursday 26 March 2026 05:44:10 +0000 (0:00:00.784) 0:41:34.239 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
Thursday 26 March 2026 05:44:11 +0000 (0:00:00.781) 0:41:35.020 ********
skipping: [testbed-node-4]

TASK [ceph-common : Get ceph version] ******************************************
Thursday 26 March 2026 05:44:12 +0000 (0:00:00.790) 0:41:35.810 ********
skipping: [testbed-node-4]

TASK [ceph-common : Set_fact ceph_version] *************************************
Thursday 26 March 2026 05:44:12 +0000 (0:00:00.798) 0:41:36.609 ********
skipping: [testbed-node-4]

TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
Thursday 26 March 2026 05:44:13 +0000 (0:00:00.754) 0:41:37.363 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
Thursday 26 March 2026 05:44:14 +0000 (0:00:00.772) 0:41:38.136 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include configure_cluster_name.yml] ************************
Thursday 26 March 2026 05:44:15 +0000 (0:00:00.825) 0:41:38.962 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include configure_memory_allocator.yml] ********************
Thursday 26 March 2026 05:44:16 +0000 (0:00:00.790) 0:41:39.752 ********
skipping: [testbed-node-4]

TASK [ceph-common : Include selinux.yml] ***************************************
Thursday 26 March 2026 05:44:16 +0000 (0:00:00.762) 0:41:40.515 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Generate systemd ceph target file] ***************
Thursday 26 March 2026 05:44:17 +0000 (0:00:00.931) 0:41:41.447 ********
ok: [testbed-node-4]

TASK [ceph-container-common : Enable ceph.target] ******************************
Thursday 26 March 2026 05:44:19 +0000 (0:00:01.609) 0:41:43.057 ********
ok: [testbed-node-4]

TASK [ceph-container-common : Include prerequisites.yml] ***********************
Thursday 26 March 2026 05:44:21 +0000 (0:00:01.837) 0:41:44.894 ********
included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4

TASK [ceph-container-common : Stop lvmetad] ************************************
Thursday 26 March 2026 05:44:22 +0000 (0:00:01.157) 0:41:46.052 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Disable and mask lvmetad service] ****************
Thursday 26 March 2026 05:44:23 +0000 (0:00:01.185) 0:41:47.237 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Remove ceph udev rules] **************************
Thursday 26 March 2026 05:44:24 +0000 (0:00:01.156) 0:41:48.394 ********
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)

TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
Thursday 26 March 2026 05:44:26 +0000 (0:00:01.772) 0:41:50.167 ********
ok: [testbed-node-4]

TASK [ceph-container-common : Restore certificates selinux context] ************
Thursday 26 March 2026 05:44:27 +0000 (0:00:01.473) 0:41:51.640 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Install python3 on osd nodes] ********************
Thursday 26 March 2026 05:44:29 +0000 (0:00:01.153) 0:41:52.793 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Include registry.yml] ****************************
Thursday 26 March 2026 05:44:29 +0000 (0:00:00.785) 0:41:53.580 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Include fetch_image.yml] *************************
Thursday 26 March 2026 05:44:30 +0000 (0:00:00.785) 0:41:54.365 ********
included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4

TASK [ceph-container-common : Pulling Ceph container image] ********************
Thursday 26 March 2026 05:44:32 +0000 (0:00:01.379) 0:41:55.744 ********
ok: [testbed-node-4]

TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Thursday 26 March 2026 05:44:33 +0000 (0:00:01.802) 0:41:57.547 ********
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Thursday 26 March 2026 05:44:35 +0000 (0:00:01.138) 0:41:58.685 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Export local ceph dev image] *********************
Thursday 26 March 2026 05:44:36 +0000 (0:00:01.134) 0:41:59.820 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Thursday 26 March 2026 05:44:37 +0000 (0:00:01.165) 0:42:00.985 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Load ceph dev image] *****************************
Thursday 26 March 2026 05:44:38 +0000 (0:00:01.130) 0:42:02.116 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Thursday 26 March 2026 05:44:39 +0000 (0:00:01.184) 0:42:03.301 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Get ceph version] ********************************
Thursday 26 March 2026 05:44:40 +0000 (0:00:00.798) 0:42:04.099 ********
ok: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Thursday 26 March 2026 05:44:42 +0000 (0:00:02.151) 0:42:06.251 ********
ok: [testbed-node-4]

TASK [ceph-container-common : Include release.yml] *****************************
Thursday 26 March 2026 05:44:43 +0000 (0:00:00.758) 0:42:07.009 ********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Thursday 26 March 2026 05:44:44 +0000 (0:00:01.118) 0:42:08.128 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Thursday 26 March 2026 05:44:45 +0000 (0:00:01.266) 0:42:09.394 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Thursday 26 March 2026 05:44:46 +0000 (0:00:01.150) 0:42:10.545 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Thursday 26 March 2026 05:44:48 +0000 (0:00:01.157) 0:42:11.702 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Thursday 26 March 2026 05:44:49 +0000 (0:00:01.124) 0:42:12.827 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Thursday 26 March 2026 05:44:50 +0000 (0:00:01.147) 0:42:13.974 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Thursday 26 March 2026 05:44:51 +0000 (0:00:01.118) 0:42:15.093 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Thursday 26 March 2026 05:44:52 +0000 (0:00:01.139) 0:42:16.233 ********
skipping: [testbed-node-4]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Thursday 26 March 2026 05:44:53 +0000 (0:00:01.123) 0:42:17.356 ********
ok: [testbed-node-4]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Thursday 26 March 2026 05:44:54 +0000 (0:00:00.834) 0:42:18.191 ********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4

TASK [ceph-config : Create ceph initial directories] ***************************
Thursday 26 March 2026 05:44:55 +0000 (0:00:01.149) 0:42:19.340 ********
ok: [testbed-node-4] => (item=/etc/ceph)
ok: [testbed-node-4] => (item=/var/lib/ceph/)
ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
ok: [testbed-node-4] => (item=/var/run/ceph)
ok: [testbed-node-4] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Thursday 26 March 2026 05:45:01 +0000 (0:00:06.153) 0:42:25.494 ********
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4

TASK [ceph-config : Create rados gateway instance directories] *****************
Thursday 26 March 2026 05:45:03 +0000 (0:00:01.273) 0:42:26.768 ********
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Thursday 26 March 2026 05:45:04 +0000 (0:00:01.488) 0:42:28.257 ********
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Thursday 26 March 2026 05:45:06 +0000 (0:00:01.616) 0:42:29.874 ********
skipping: [testbed-node-4]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Thursday 26 March 2026 05:45:06 +0000 (0:00:00.734) 0:42:30.608 ********
skipping: [testbed-node-4]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Thursday 26 March 2026 05:45:07 +0000 (0:00:00.749) 0:42:31.357 ********
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Thursday 26 March 2026 05:45:08 +0000 (0:00:00.770) 0:42:32.128 ********
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact _devices] *****************************************
Thursday 26 March 2026 05:45:09 +0000 (0:00:00.838) 0:42:32.967 ********
skipping: [testbed-node-4]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Thursday 26 March 2026 05:45:10 +0000 (0:00:00.737) 0:42:33.704 ********
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Thursday 26 March 2026 05:45:10 +0000 (0:00:00.789) 0:42:34.493 ********
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Thursday 26 March 2026 05:45:11 +0000 (0:00:00.757) 0:42:35.250 ********
skipping: [testbed-node-4]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Thursday 26 March 2026 05:45:12 +0000 (0:00:00.745) 0:42:35.996 ********
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Thursday 26 March 2026 05:45:13 +0000 (0:00:00.787) 0:42:36.784 ********
skipping: [testbed-node-4]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Thursday 26 March 2026 05:45:13 +0000 (0:00:00.779) 0:42:37.563 ********
ok: [testbed-node-4]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Thursday 26 March 2026 05:45:14 +0000 (0:00:00.912) 0:42:38.476 ********
changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]

TASK [ceph-config : Render rgw configs] ****************************************
Thursday 26 March 2026 05:45:18 +0000 (0:00:04.172) 0:42:42.648 ********
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Set config to cluster] *************************************
Thursday 26 March 2026 05:45:19 +0000 (0:00:00.850) 0:42:43.498 ********
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-26 05:46:01.628024 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-26 05:46:01.628046 | orchestrator | 2026-03-26 05:46:01.628063 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 05:46:01.628081 | orchestrator | Thursday 26 March 2026 05:45:27 +0000 (0:00:07.704) 0:42:51.203 ******** 2026-03-26 05:46:01.628102 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.628124 | orchestrator | 2026-03-26 05:46:01.628146 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 05:46:01.628166 | orchestrator | Thursday 26 March 2026 05:45:28 +0000 (0:00:00.773) 0:42:51.976 ******** 2026-03-26 05:46:01.628185 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.628205 | orchestrator | 2026-03-26 05:46:01.628226 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:46:01.628247 | orchestrator | Thursday 26 March 2026 05:45:29 +0000 (0:00:00.853) 0:42:52.829 ******** 2026-03-26 05:46:01.628268 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.628291 | orchestrator | 2026-03-26 05:46:01.628311 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-26 05:46:01.628332 | orchestrator | Thursday 26 March 2026 05:45:29 +0000 (0:00:00.816) 0:42:53.646 ******** 2026-03-26 05:46:01.628354 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.628376 | orchestrator | 2026-03-26 05:46:01.628397 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:46:01.628438 | orchestrator | Thursday 26 March 2026 05:45:30 +0000 (0:00:00.774) 0:42:54.421 ******** 2026-03-26 05:46:01.628460 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.628482 | orchestrator | 2026-03-26 05:46:01.628502 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:46:01.628525 | orchestrator | Thursday 26 March 2026 05:45:31 +0000 (0:00:00.813) 0:42:55.235 ******** 2026-03-26 05:46:01.628547 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:46:01.628568 | orchestrator | 2026-03-26 05:46:01.628588 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:46:01.628608 | orchestrator | Thursday 26 March 2026 05:45:32 +0000 (0:00:00.911) 0:42:56.147 ******** 2026-03-26 05:46:01.628628 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 05:46:01.628648 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 05:46:01.628669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 05:46:01.628689 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.628705 | orchestrator | 2026-03-26 05:46:01.628716 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:46:01.628727 | orchestrator | Thursday 26 March 2026 05:45:33 +0000 (0:00:01.106) 0:42:57.253 ******** 2026-03-26 05:46:01.628737 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 05:46:01.628748 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 05:46:01.628759 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 05:46:01.628769 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.628784 | orchestrator | 2026-03-26 05:46:01.628942 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:46:01.628981 | orchestrator | Thursday 26 March 2026 05:45:35 +0000 (0:00:01.529) 0:42:58.782 ******** 2026-03-26 05:46:01.629002 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 05:46:01.629021 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 05:46:01.629040 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 05:46:01.629061 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.629080 | orchestrator | 2026-03-26 05:46:01.629099 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:46:01.629119 | orchestrator | Thursday 26 March 2026 05:45:36 +0000 (0:00:01.469) 0:43:00.252 ******** 2026-03-26 05:46:01.629138 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:46:01.629156 | orchestrator | 2026-03-26 05:46:01.629175 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:46:01.629194 | orchestrator | Thursday 26 March 2026 05:45:37 +0000 (0:00:00.875) 0:43:01.128 ******** 2026-03-26 05:46:01.629213 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-26 05:46:01.629233 | orchestrator | 2026-03-26 05:46:01.629252 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 05:46:01.629271 | orchestrator | Thursday 26 March 2026 05:45:38 +0000 (0:00:00.983) 0:43:02.111 ******** 2026-03-26 05:46:01.629290 | orchestrator | changed: [testbed-node-4] 2026-03-26 05:46:01.629308 | orchestrator | 
2026-03-26 05:46:01.629327 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-26 05:46:01.629347 | orchestrator | Thursday 26 March 2026 05:45:39 +0000 (0:00:01.388) 0:43:03.500 ******** 2026-03-26 05:46:01.629366 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:46:01.629386 | orchestrator | 2026-03-26 05:46:01.629432 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-26 05:46:01.629452 | orchestrator | Thursday 26 March 2026 05:45:40 +0000 (0:00:00.778) 0:43:04.279 ******** 2026-03-26 05:46:01.629471 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:46:01.629491 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:46:01.629510 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:46:01.629527 | orchestrator | 2026-03-26 05:46:01.629545 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-26 05:46:01.629563 | orchestrator | Thursday 26 March 2026 05:45:41 +0000 (0:00:01.361) 0:43:05.640 ******** 2026-03-26 05:46:01.629582 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-03-26 05:46:01.629599 | orchestrator | 2026-03-26 05:46:01.629617 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-26 05:46:01.629635 | orchestrator | Thursday 26 March 2026 05:45:43 +0000 (0:00:01.123) 0:43:06.764 ******** 2026-03-26 05:46:01.629654 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.629673 | orchestrator | 2026-03-26 05:46:01.629690 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-26 05:46:01.629709 | orchestrator | Thursday 26 March 2026 05:45:44 +0000 (0:00:01.139) 
0:43:07.904 ******** 2026-03-26 05:46:01.629720 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.629730 | orchestrator | 2026-03-26 05:46:01.629741 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-26 05:46:01.629751 | orchestrator | Thursday 26 March 2026 05:45:45 +0000 (0:00:01.177) 0:43:09.081 ******** 2026-03-26 05:46:01.629762 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:46:01.629772 | orchestrator | 2026-03-26 05:46:01.629783 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-26 05:46:01.629793 | orchestrator | Thursday 26 March 2026 05:45:46 +0000 (0:00:01.440) 0:43:10.522 ******** 2026-03-26 05:46:01.629804 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:46:01.629814 | orchestrator | 2026-03-26 05:46:01.629825 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-26 05:46:01.629845 | orchestrator | Thursday 26 March 2026 05:45:48 +0000 (0:00:01.161) 0:43:11.684 ******** 2026-03-26 05:46:01.629856 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-26 05:46:01.629875 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-26 05:46:01.629886 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-26 05:46:01.629897 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-26 05:46:01.629929 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-26 05:46:01.629939 | orchestrator | 2026-03-26 05:46:01.629950 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-26 05:46:01.629960 | orchestrator | Thursday 26 March 2026 05:45:50 +0000 (0:00:02.610) 0:43:14.295 ******** 2026-03-26 
05:46:01.629971 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.629982 | orchestrator | 2026-03-26 05:46:01.629992 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-26 05:46:01.630003 | orchestrator | Thursday 26 March 2026 05:45:51 +0000 (0:00:00.776) 0:43:15.071 ******** 2026-03-26 05:46:01.630084 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-03-26 05:46:01.630099 | orchestrator | 2026-03-26 05:46:01.630110 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-26 05:46:01.630120 | orchestrator | Thursday 26 March 2026 05:45:52 +0000 (0:00:01.148) 0:43:16.220 ******** 2026-03-26 05:46:01.630131 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-26 05:46:01.630141 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-03-26 05:46:01.630152 | orchestrator | 2026-03-26 05:46:01.630162 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-26 05:46:01.630173 | orchestrator | Thursday 26 March 2026 05:45:54 +0000 (0:00:01.848) 0:43:18.068 ******** 2026-03-26 05:46:01.630183 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 05:46:01.630194 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 05:46:01.630205 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 05:46:01.630215 | orchestrator | 2026-03-26 05:46:01.630225 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-26 05:46:01.630236 | orchestrator | Thursday 26 March 2026 05:45:57 +0000 (0:00:03.191) 0:43:21.260 ******** 2026-03-26 05:46:01.630247 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-26 05:46:01.630257 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 
05:46:01.630268 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:46:01.630279 | orchestrator | 2026-03-26 05:46:01.630289 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-26 05:46:01.630300 | orchestrator | Thursday 26 March 2026 05:45:59 +0000 (0:00:01.582) 0:43:22.843 ******** 2026-03-26 05:46:01.630310 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.630321 | orchestrator | 2026-03-26 05:46:01.630331 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-26 05:46:01.630342 | orchestrator | Thursday 26 March 2026 05:46:00 +0000 (0:00:00.865) 0:43:23.708 ******** 2026-03-26 05:46:01.630353 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.630363 | orchestrator | 2026-03-26 05:46:01.630374 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-26 05:46:01.630384 | orchestrator | Thursday 26 March 2026 05:46:00 +0000 (0:00:00.802) 0:43:24.511 ******** 2026-03-26 05:46:01.630395 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:46:01.630406 | orchestrator | 2026-03-26 05:46:01.630427 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-26 05:47:04.519980 | orchestrator | Thursday 26 March 2026 05:46:01 +0000 (0:00:00.763) 0:43:25.275 ******** 2026-03-26 05:47:04.520092 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-03-26 05:47:04.520130 | orchestrator | 2026-03-26 05:47:04.520143 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-26 05:47:04.520155 | orchestrator | Thursday 26 March 2026 05:46:02 +0000 (0:00:01.133) 0:43:26.409 ******** 2026-03-26 05:47:04.520166 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:47:04.520178 | orchestrator | 2026-03-26 05:47:04.520190 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-26 05:47:04.520201 | orchestrator | Thursday 26 March 2026 05:46:04 +0000 (0:00:01.473) 0:43:27.882 ******** 2026-03-26 05:47:04.520212 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:47:04.520223 | orchestrator | 2026-03-26 05:47:04.520233 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-26 05:47:04.520244 | orchestrator | Thursday 26 March 2026 05:46:07 +0000 (0:00:03.291) 0:43:31.174 ******** 2026-03-26 05:47:04.520255 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-03-26 05:47:04.520265 | orchestrator | 2026-03-26 05:47:04.520276 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-26 05:47:04.520286 | orchestrator | Thursday 26 March 2026 05:46:08 +0000 (0:00:01.241) 0:43:32.416 ******** 2026-03-26 05:47:04.520297 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:47:04.520308 | orchestrator | 2026-03-26 05:47:04.520318 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-26 05:47:04.520329 | orchestrator | Thursday 26 March 2026 05:46:10 +0000 (0:00:01.995) 0:43:34.412 ******** 2026-03-26 05:47:04.520339 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:47:04.520350 | orchestrator | 2026-03-26 05:47:04.520360 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-26 05:47:04.520371 | orchestrator | Thursday 26 March 2026 05:46:12 +0000 (0:00:01.938) 0:43:36.350 ******** 2026-03-26 05:47:04.520382 | orchestrator | ok: [testbed-node-4] 2026-03-26 05:47:04.520393 | orchestrator | 2026-03-26 05:47:04.520403 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-26 05:47:04.520414 | orchestrator | Thursday 26 March 2026 05:46:14 +0000 (0:00:02.246) 0:43:38.597 ******** 2026-03-26 
05:47:04.520424 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.520437 | orchestrator | 2026-03-26 05:47:04.520447 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-03-26 05:47:04.520473 | orchestrator | Thursday 26 March 2026 05:46:16 +0000 (0:00:01.201) 0:43:39.799 ******** 2026-03-26 05:47:04.520486 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.520500 | orchestrator | 2026-03-26 05:47:04.520513 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-03-26 05:47:04.520525 | orchestrator | Thursday 26 March 2026 05:46:17 +0000 (0:00:01.182) 0:43:40.981 ******** 2026-03-26 05:47:04.520538 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-26 05:47:04.520550 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-26 05:47:04.520562 | orchestrator | 2026-03-26 05:47:04.520575 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-03-26 05:47:04.520587 | orchestrator | Thursday 26 March 2026 05:46:19 +0000 (0:00:01.812) 0:43:42.793 ******** 2026-03-26 05:47:04.520599 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-03-26 05:47:04.520612 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-03-26 05:47:04.520624 | orchestrator | 2026-03-26 05:47:04.520636 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-03-26 05:47:04.520648 | orchestrator | Thursday 26 March 2026 05:46:22 +0000 (0:00:02.900) 0:43:45.694 ******** 2026-03-26 05:47:04.520660 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-03-26 05:47:04.520673 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-03-26 05:47:04.520686 | orchestrator | 2026-03-26 05:47:04.520698 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-03-26 05:47:04.520710 | orchestrator | Thursday 26 March 2026 05:46:26 +0000 (0:00:04.206) 
0:43:49.900 ******** 2026-03-26 05:47:04.520722 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.520742 | orchestrator | 2026-03-26 05:47:04.520755 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-03-26 05:47:04.520768 | orchestrator | Thursday 26 March 2026 05:46:27 +0000 (0:00:00.889) 0:43:50.790 ******** 2026-03-26 05:47:04.520780 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.520792 | orchestrator | 2026-03-26 05:47:04.520805 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-26 05:47:04.520817 | orchestrator | Thursday 26 March 2026 05:46:28 +0000 (0:00:00.885) 0:43:51.676 ******** 2026-03-26 05:47:04.520829 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.520840 | orchestrator | 2026-03-26 05:47:04.520851 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-03-26 05:47:04.520862 | orchestrator | Thursday 26 March 2026 05:46:29 +0000 (0:00:01.016) 0:43:52.693 ******** 2026-03-26 05:47:04.520898 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.520909 | orchestrator | 2026-03-26 05:47:04.520920 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-03-26 05:47:04.520930 | orchestrator | Thursday 26 March 2026 05:46:29 +0000 (0:00:00.750) 0:43:53.443 ******** 2026-03-26 05:47:04.520941 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.520952 | orchestrator | 2026-03-26 05:47:04.520962 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-03-26 05:47:04.520973 | orchestrator | Thursday 26 March 2026 05:46:30 +0000 (0:00:00.789) 0:43:54.232 ******** 2026-03-26 05:47:04.520984 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-03-26 05:47:04.520995 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-03-26 05:47:04.521006 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-03-26 05:47:04.521034 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-03-26 05:47:04.521046 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:47:04.521057 | orchestrator | 2026-03-26 05:47:04.521067 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-26 05:47:04.521078 | orchestrator | Thursday 26 March 2026 05:46:44 +0000 (0:00:13.720) 0:44:07.953 ******** 2026-03-26 05:47:04.521089 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.521099 | orchestrator | 2026-03-26 05:47:04.521110 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-26 05:47:04.521121 | orchestrator | Thursday 26 March 2026 05:46:45 +0000 (0:00:00.916) 0:44:08.870 ******** 2026-03-26 05:47:04.521132 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.521142 | orchestrator | 2026-03-26 05:47:04.521153 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-26 05:47:04.521163 | orchestrator | Thursday 26 March 2026 05:46:46 +0000 (0:00:00.788) 0:44:09.659 ******** 2026-03-26 05:47:04.521174 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.521185 | orchestrator | 2026-03-26 05:47:04.521195 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-26 05:47:04.521206 | orchestrator | Thursday 26 March 2026 05:46:46 +0000 (0:00:00.767) 0:44:10.426 ******** 2026-03-26 05:47:04.521216 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.521227 | orchestrator 
| 2026-03-26 05:47:04.521238 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-26 05:47:04.521248 | orchestrator | Thursday 26 March 2026 05:46:47 +0000 (0:00:00.770) 0:44:11.197 ******** 2026-03-26 05:47:04.521259 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.521269 | orchestrator | 2026-03-26 05:47:04.521280 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-03-26 05:47:04.521290 | orchestrator | Thursday 26 March 2026 05:46:48 +0000 (0:00:00.788) 0:44:11.985 ******** 2026-03-26 05:47:04.521301 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.521321 | orchestrator | 2026-03-26 05:47:04.521332 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-26 05:47:04.521343 | orchestrator | Thursday 26 March 2026 05:46:49 +0000 (0:00:00.769) 0:44:12.755 ******** 2026-03-26 05:47:04.521353 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:47:04.521364 | orchestrator | 2026-03-26 05:47:04.521374 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-03-26 05:47:04.521385 | orchestrator | 2026-03-26 05:47:04.521395 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:47:04.521411 | orchestrator | Thursday 26 March 2026 05:46:50 +0000 (0:00:00.960) 0:44:13.715 ******** 2026-03-26 05:47:04.521422 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-26 05:47:04.521433 | orchestrator | 2026-03-26 05:47:04.521443 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:47:04.521454 | orchestrator | Thursday 26 March 2026 05:46:51 +0000 (0:00:01.338) 0:44:15.053 ******** 2026-03-26 05:47:04.521464 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521475 | orchestrator | 
2026-03-26 05:47:04.521486 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:47:04.521496 | orchestrator | Thursday 26 March 2026 05:46:52 +0000 (0:00:01.454) 0:44:16.508 ******** 2026-03-26 05:47:04.521507 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521517 | orchestrator | 2026-03-26 05:47:04.521528 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:47:04.521538 | orchestrator | Thursday 26 March 2026 05:46:53 +0000 (0:00:01.108) 0:44:17.617 ******** 2026-03-26 05:47:04.521549 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521559 | orchestrator | 2026-03-26 05:47:04.521570 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:47:04.521581 | orchestrator | Thursday 26 March 2026 05:46:55 +0000 (0:00:01.485) 0:44:19.102 ******** 2026-03-26 05:47:04.521591 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521602 | orchestrator | 2026-03-26 05:47:04.521613 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:47:04.521623 | orchestrator | Thursday 26 March 2026 05:46:56 +0000 (0:00:01.191) 0:44:20.294 ******** 2026-03-26 05:47:04.521634 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521644 | orchestrator | 2026-03-26 05:47:04.521655 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 05:47:04.521666 | orchestrator | Thursday 26 March 2026 05:46:57 +0000 (0:00:01.128) 0:44:21.423 ******** 2026-03-26 05:47:04.521676 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521687 | orchestrator | 2026-03-26 05:47:04.521698 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:47:04.521708 | orchestrator | Thursday 26 March 2026 05:46:58 +0000 (0:00:01.167) 0:44:22.590 
******** 2026-03-26 05:47:04.521719 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:04.521729 | orchestrator | 2026-03-26 05:47:04.521740 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 05:47:04.521750 | orchestrator | Thursday 26 March 2026 05:47:00 +0000 (0:00:01.132) 0:44:23.723 ******** 2026-03-26 05:47:04.521761 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521772 | orchestrator | 2026-03-26 05:47:04.521782 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 05:47:04.521793 | orchestrator | Thursday 26 March 2026 05:47:01 +0000 (0:00:01.148) 0:44:24.871 ******** 2026-03-26 05:47:04.521803 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:47:04.521814 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:47:04.521824 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:47:04.521835 | orchestrator | 2026-03-26 05:47:04.521846 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:47:04.521856 | orchestrator | Thursday 26 March 2026 05:47:03 +0000 (0:00:02.057) 0:44:26.929 ******** 2026-03-26 05:47:04.521893 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:04.521904 | orchestrator | 2026-03-26 05:47:04.521922 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 05:47:30.042875 | orchestrator | Thursday 26 March 2026 05:47:04 +0000 (0:00:01.237) 0:44:28.166 ******** 2026-03-26 05:47:30.042987 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:47:30.042994 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:47:30.042999 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:47:30.043003 | orchestrator | 2026-03-26 05:47:30.043008 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:47:30.043012 | orchestrator | Thursday 26 March 2026 05:47:07 +0000 (0:00:03.293) 0:44:31.460 ******** 2026-03-26 05:47:30.043017 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-26 05:47:30.043022 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-26 05:47:30.043025 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-26 05:47:30.043029 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043034 | orchestrator | 2026-03-26 05:47:30.043038 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:47:30.043042 | orchestrator | Thursday 26 March 2026 05:47:09 +0000 (0:00:01.791) 0:44:33.252 ******** 2026-03-26 05:47:30.043047 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:47:30.043054 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 05:47:30.043058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 05:47:30.043078 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043082 | orchestrator | 2026-03-26 
05:47:30.043086 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 05:47:30.043090 | orchestrator | Thursday 26 March 2026 05:47:11 +0000 (0:00:01.620) 0:44:34.872 ******** 2026-03-26 05:47:30.043097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:30.043104 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:30.043108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:30.043112 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043133 | orchestrator | 2026-03-26 05:47:30.043138 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 05:47:30.043141 | orchestrator | Thursday 26 March 2026 05:47:12 +0000 (0:00:01.214) 0:44:36.087 ******** 2026-03-26 05:47:30.043148 | orchestrator | 
ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:47:05.394004', 'end': '2026-03-26 05:47:05.430712', 'delta': '0:00:00.036708', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 05:47:30.043169 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:47:05.989006', 'end': '2026-03-26 05:47:06.037018', 'delta': '0:00:00.048012', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 05:47:30.043174 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:47:06.596719', 'end': '2026-03-26 05:47:06.655213', 'delta': '0:00:00.058494', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 05:47:30.043178 | orchestrator | 2026-03-26 05:47:30.043182 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 05:47:30.043186 | orchestrator | Thursday 26 March 2026 05:47:13 +0000 (0:00:01.178) 0:44:37.265 ******** 2026-03-26 05:47:30.043193 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:30.043198 | orchestrator | 2026-03-26 05:47:30.043202 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 05:47:30.043206 | orchestrator | Thursday 26 March 2026 05:47:14 +0000 (0:00:01.258) 0:44:38.524 ******** 2026-03-26 05:47:30.043210 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043214 | orchestrator | 2026-03-26 05:47:30.043217 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 05:47:30.043221 | orchestrator | Thursday 26 March 2026 05:47:16 +0000 (0:00:01.293) 0:44:39.818 ******** 2026-03-26 05:47:30.043225 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:30.043229 | orchestrator | 2026-03-26 05:47:30.043233 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 05:47:30.043236 | orchestrator | Thursday 26 March 2026 05:47:17 +0000 (0:00:01.139) 0:44:40.957 ******** 2026-03-26 05:47:30.043240 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:47:30.043244 | orchestrator | 2026-03-26 05:47:30.043248 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:47:30.043256 | orchestrator | 
Thursday 26 March 2026 05:47:19 +0000 (0:00:02.028) 0:44:42.986 ******** 2026-03-26 05:47:30.043260 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:30.043264 | orchestrator | 2026-03-26 05:47:30.043268 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 05:47:30.043272 | orchestrator | Thursday 26 March 2026 05:47:20 +0000 (0:00:01.319) 0:44:44.306 ******** 2026-03-26 05:47:30.043276 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043279 | orchestrator | 2026-03-26 05:47:30.043283 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 05:47:30.043287 | orchestrator | Thursday 26 March 2026 05:47:21 +0000 (0:00:01.155) 0:44:45.462 ******** 2026-03-26 05:47:30.043291 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043295 | orchestrator | 2026-03-26 05:47:30.043298 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:47:30.043302 | orchestrator | Thursday 26 March 2026 05:47:23 +0000 (0:00:01.251) 0:44:46.713 ******** 2026-03-26 05:47:30.043306 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043310 | orchestrator | 2026-03-26 05:47:30.043313 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 05:47:30.043317 | orchestrator | Thursday 26 March 2026 05:47:24 +0000 (0:00:01.141) 0:44:47.854 ******** 2026-03-26 05:47:30.043321 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043325 | orchestrator | 2026-03-26 05:47:30.043329 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 05:47:30.043333 | orchestrator | Thursday 26 March 2026 05:47:25 +0000 (0:00:01.154) 0:44:49.009 ******** 2026-03-26 05:47:30.043336 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:30.043340 | orchestrator | 2026-03-26 05:47:30.043344 | 
orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 05:47:30.043348 | orchestrator | Thursday 26 March 2026 05:47:26 +0000 (0:00:01.234) 0:44:50.244 ******** 2026-03-26 05:47:30.043352 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043355 | orchestrator | 2026-03-26 05:47:30.043359 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 05:47:30.043363 | orchestrator | Thursday 26 March 2026 05:47:27 +0000 (0:00:01.124) 0:44:51.368 ******** 2026-03-26 05:47:30.043367 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:30.043370 | orchestrator | 2026-03-26 05:47:30.043374 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 05:47:30.043378 | orchestrator | Thursday 26 March 2026 05:47:28 +0000 (0:00:01.176) 0:44:52.544 ******** 2026-03-26 05:47:30.043382 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:30.043386 | orchestrator | 2026-03-26 05:47:31.462890 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 05:47:31.463030 | orchestrator | Thursday 26 March 2026 05:47:30 +0000 (0:00:01.146) 0:44:53.691 ******** 2026-03-26 05:47:31.463044 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:31.463053 | orchestrator | 2026-03-26 05:47:31.463060 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 05:47:31.463067 | orchestrator | Thursday 26 March 2026 05:47:31 +0000 (0:00:01.186) 0:44:54.878 ******** 2026-03-26 05:47:31.463114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:31.463131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'uuids': ['958c3d71-9b3b-484b-8cbf-f174ba1f6fac'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp']}})  2026-03-26 05:47:31.463188 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ddd7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 05:47:31.463198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66']}})  2026-03-26 05:47:31.463205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:31.463213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:31.463242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-15-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:47:31.463251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:31.463258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD', 'dm-uuid-CRYPT-LUKS2-4b88786507c84424981e8c33baf61cbe-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:47:31.463274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:31.463281 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'uuids': ['4b887865-07c8-4424-981e-8c33baf61cbe'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD']}})  2026-03-26 05:47:31.463288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771']}})  2026-03-26 05:47:31.463295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:31.463316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:47:32.887036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:32.887174 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:47:32.887193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:47:32.887209 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:32.887222 | orchestrator | 2026-03-26 05:47:32.887235 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:47:32.887248 | orchestrator | Thursday 26 March 2026 05:47:32 +0000 (0:00:01.422) 0:44:56.301 ******** 2026-03-26 05:47:32.887260 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'uuids': ['958c3d71-9b3b-484b-8cbf-f174ba1f6fac'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887315 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ddd7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887383 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887395 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887428 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:32.887452 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD', 'dm-uuid-CRYPT-LUKS2-4b88786507c84424981e8c33baf61cbe-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.223630 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.223785 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'uuids': ['4b887865-07c8-4424-981e-8c33baf61cbe'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.223804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.223821 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.223993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.224044 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.224059 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.224081 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:47:39.224094 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:47:39.224109 | orchestrator | 2026-03-26 05:47:39.224124 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:47:39.224138 | orchestrator | Thursday 26 March 2026 05:47:34 +0000 (0:00:01.415) 0:44:57.717 ******** 2026-03-26 05:47:39.224150 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:39.224164 | orchestrator | 2026-03-26 05:47:39.224177 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:47:39.224189 | orchestrator | Thursday 26 March 2026 05:47:36 +0000 (0:00:02.516) 0:45:00.233 ******** 2026-03-26 05:47:39.224202 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:39.224215 | orchestrator | 2026-03-26 05:47:39.224227 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:47:39.224239 | orchestrator | Thursday 26 March 2026 05:47:37 +0000 (0:00:01.138) 0:45:01.372 ******** 2026-03-26 05:47:39.224251 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:47:39.224263 | orchestrator | 2026-03-26 05:47:39.224281 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:47:39.224303 | orchestrator | Thursday 26 March 2026 05:47:39 +0000 (0:00:01.502) 0:45:02.874 ******** 2026-03-26 05:48:22.369431 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.369575 | orchestrator | 2026-03-26 05:48:22.369590 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:48:22.369602 | orchestrator | Thursday 26 March 2026 05:47:40 +0000 (0:00:01.151) 0:45:04.026 ******** 2026-03-26 05:48:22.369613 | orchestrator | skipping: [testbed-node-5] 2026-03-26 
05:48:22.369623 | orchestrator | 2026-03-26 05:48:22.369633 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:48:22.369642 | orchestrator | Thursday 26 March 2026 05:47:41 +0000 (0:00:01.240) 0:45:05.267 ******** 2026-03-26 05:48:22.369652 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.369662 | orchestrator | 2026-03-26 05:48:22.369671 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:48:22.369680 | orchestrator | Thursday 26 March 2026 05:47:42 +0000 (0:00:01.173) 0:45:06.440 ******** 2026-03-26 05:48:22.369691 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-26 05:48:22.369701 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-26 05:48:22.369711 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-26 05:48:22.369720 | orchestrator | 2026-03-26 05:48:22.369729 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:48:22.369739 | orchestrator | Thursday 26 March 2026 05:47:44 +0000 (0:00:02.128) 0:45:08.569 ******** 2026-03-26 05:48:22.369748 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-26 05:48:22.369758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-26 05:48:22.369767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-26 05:48:22.369777 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.369786 | orchestrator | 2026-03-26 05:48:22.369796 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:48:22.369862 | orchestrator | Thursday 26 March 2026 05:47:46 +0000 (0:00:01.204) 0:45:09.774 ******** 2026-03-26 05:48:22.369873 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-26 05:48:22.369884 | 
orchestrator | 2026-03-26 05:48:22.369894 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:48:22.369906 | orchestrator | Thursday 26 March 2026 05:47:47 +0000 (0:00:01.160) 0:45:10.934 ******** 2026-03-26 05:48:22.369915 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.369925 | orchestrator | 2026-03-26 05:48:22.369934 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:48:22.369943 | orchestrator | Thursday 26 March 2026 05:47:48 +0000 (0:00:01.141) 0:45:12.076 ******** 2026-03-26 05:48:22.369952 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.369962 | orchestrator | 2026-03-26 05:48:22.369971 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:48:22.369980 | orchestrator | Thursday 26 March 2026 05:47:49 +0000 (0:00:01.115) 0:45:13.192 ******** 2026-03-26 05:48:22.369989 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.369998 | orchestrator | 2026-03-26 05:48:22.370008 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:48:22.370079 | orchestrator | Thursday 26 March 2026 05:47:50 +0000 (0:00:01.168) 0:45:14.361 ******** 2026-03-26 05:48:22.370088 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.370098 | orchestrator | 2026-03-26 05:48:22.370108 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:48:22.370118 | orchestrator | Thursday 26 March 2026 05:47:51 +0000 (0:00:01.257) 0:45:15.618 ******** 2026-03-26 05:48:22.370128 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 05:48:22.370138 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 05:48:22.370147 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-26 05:48:22.370156 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.370166 | orchestrator | 2026-03-26 05:48:22.370175 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:48:22.370184 | orchestrator | Thursday 26 March 2026 05:47:53 +0000 (0:00:01.387) 0:45:17.005 ******** 2026-03-26 05:48:22.370194 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 05:48:22.370203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 05:48:22.370212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 05:48:22.370222 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.370231 | orchestrator | 2026-03-26 05:48:22.370240 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:48:22.370250 | orchestrator | Thursday 26 March 2026 05:47:54 +0000 (0:00:01.362) 0:45:18.368 ******** 2026-03-26 05:48:22.370259 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 05:48:22.370268 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 05:48:22.370278 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 05:48:22.370287 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.370296 | orchestrator | 2026-03-26 05:48:22.370306 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:48:22.370315 | orchestrator | Thursday 26 March 2026 05:47:56 +0000 (0:00:01.417) 0:45:19.786 ******** 2026-03-26 05:48:22.370324 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.370333 | orchestrator | 2026-03-26 05:48:22.370343 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:48:22.370352 | orchestrator | Thursday 26 March 2026 05:47:57 +0000 
(0:00:01.184) 0:45:20.970 ******** 2026-03-26 05:48:22.370362 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-26 05:48:22.370371 | orchestrator | 2026-03-26 05:48:22.370381 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:48:22.370415 | orchestrator | Thursday 26 March 2026 05:47:59 +0000 (0:00:01.756) 0:45:22.727 ******** 2026-03-26 05:48:22.370445 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:48:22.370456 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:48:22.370466 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:48:22.370475 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:48:22.370485 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:48:22.370494 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-26 05:48:22.370504 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:48:22.370513 | orchestrator | 2026-03-26 05:48:22.370522 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:48:22.370532 | orchestrator | Thursday 26 March 2026 05:48:01 +0000 (0:00:02.170) 0:45:24.898 ******** 2026-03-26 05:48:22.370541 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:48:22.370550 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:48:22.370560 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:48:22.370569 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-26 05:48:22.370578 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:48:22.370588 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-26 05:48:22.370597 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:48:22.370606 | orchestrator | 2026-03-26 05:48:22.370616 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-03-26 05:48:22.370625 | orchestrator | Thursday 26 March 2026 05:48:03 +0000 (0:00:02.264) 0:45:27.162 ******** 2026-03-26 05:48:22.370634 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.370644 | orchestrator | 2026-03-26 05:48:22.370653 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-03-26 05:48:22.370662 | orchestrator | Thursday 26 March 2026 05:48:04 +0000 (0:00:01.150) 0:45:28.312 ******** 2026-03-26 05:48:22.370672 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.370681 | orchestrator | 2026-03-26 05:48:22.370690 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-03-26 05:48:22.370700 | orchestrator | Thursday 26 March 2026 05:48:05 +0000 (0:00:00.779) 0:45:29.092 ******** 2026-03-26 05:48:22.370709 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.370719 | orchestrator | 2026-03-26 05:48:22.370728 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-03-26 05:48:22.370737 | orchestrator | Thursday 26 March 2026 05:48:06 +0000 (0:00:00.850) 0:45:29.943 ******** 2026-03-26 05:48:22.370747 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-03-26 05:48:22.370756 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-03-26 05:48:22.370766 | orchestrator | 2026-03-26 05:48:22.370775 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-03-26 05:48:22.370785 | orchestrator | Thursday 26 March 2026 05:48:10 +0000 (0:00:04.686) 0:45:34.630 ******** 2026-03-26 05:48:22.370794 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-26 05:48:22.370804 | orchestrator | 2026-03-26 05:48:22.370813 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-26 05:48:22.370837 | orchestrator | Thursday 26 March 2026 05:48:12 +0000 (0:00:01.150) 0:45:35.780 ******** 2026-03-26 05:48:22.370847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-26 05:48:22.370863 | orchestrator | 2026-03-26 05:48:22.370873 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-26 05:48:22.370882 | orchestrator | Thursday 26 March 2026 05:48:13 +0000 (0:00:01.098) 0:45:36.879 ******** 2026-03-26 05:48:22.370892 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.370901 | orchestrator | 2026-03-26 05:48:22.370910 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-26 05:48:22.370920 | orchestrator | Thursday 26 March 2026 05:48:14 +0000 (0:00:01.112) 0:45:37.991 ******** 2026-03-26 05:48:22.370929 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.370939 | orchestrator | 2026-03-26 05:48:22.370948 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-26 05:48:22.370958 | orchestrator | Thursday 26 March 2026 05:48:15 +0000 (0:00:01.503) 0:45:39.495 ******** 2026-03-26 05:48:22.370967 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.370977 | orchestrator | 2026-03-26 05:48:22.370986 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-26 05:48:22.370996 | orchestrator | 
Thursday 26 March 2026 05:48:17 +0000 (0:00:01.614) 0:45:41.110 ******** 2026-03-26 05:48:22.371005 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:48:22.371014 | orchestrator | 2026-03-26 05:48:22.371024 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-26 05:48:22.371033 | orchestrator | Thursday 26 March 2026 05:48:18 +0000 (0:00:01.536) 0:45:42.646 ******** 2026-03-26 05:48:22.371042 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.371052 | orchestrator | 2026-03-26 05:48:22.371061 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-26 05:48:22.371070 | orchestrator | Thursday 26 March 2026 05:48:20 +0000 (0:00:01.113) 0:45:43.760 ******** 2026-03-26 05:48:22.371080 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.371089 | orchestrator | 2026-03-26 05:48:22.371098 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-26 05:48:22.371113 | orchestrator | Thursday 26 March 2026 05:48:21 +0000 (0:00:01.121) 0:45:44.882 ******** 2026-03-26 05:48:22.371123 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:48:22.371132 | orchestrator | 2026-03-26 05:48:22.371147 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-26 05:49:02.283922 | orchestrator | Thursday 26 March 2026 05:48:22 +0000 (0:00:01.131) 0:45:46.014 ******** 2026-03-26 05:49:02.284046 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.284064 | orchestrator | 2026-03-26 05:49:02.284076 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 05:49:02.284088 | orchestrator | Thursday 26 March 2026 05:48:23 +0000 (0:00:01.518) 0:45:47.533 ******** 2026-03-26 05:49:02.284099 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.284110 | orchestrator | 2026-03-26 05:49:02.284121 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 05:49:02.284132 | orchestrator | Thursday 26 March 2026 05:48:25 +0000 (0:00:01.632) 0:45:49.165 ******** 2026-03-26 05:49:02.284143 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284155 | orchestrator | 2026-03-26 05:49:02.284166 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 05:49:02.284176 | orchestrator | Thursday 26 March 2026 05:48:26 +0000 (0:00:00.787) 0:45:49.952 ******** 2026-03-26 05:49:02.284187 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284199 | orchestrator | 2026-03-26 05:49:02.284210 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 05:49:02.284221 | orchestrator | Thursday 26 March 2026 05:48:27 +0000 (0:00:00.793) 0:45:50.746 ******** 2026-03-26 05:49:02.284232 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.284242 | orchestrator | 2026-03-26 05:49:02.284253 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 05:49:02.284264 | orchestrator | Thursday 26 March 2026 05:48:27 +0000 (0:00:00.779) 0:45:51.525 ******** 2026-03-26 05:49:02.284275 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.284309 | orchestrator | 2026-03-26 05:49:02.284320 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 05:49:02.284331 | orchestrator | Thursday 26 March 2026 05:48:28 +0000 (0:00:00.835) 0:45:52.361 ******** 2026-03-26 05:49:02.284342 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.284353 | orchestrator | 2026-03-26 05:49:02.284363 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 05:49:02.284374 | orchestrator | Thursday 26 March 2026 05:48:29 +0000 (0:00:00.779) 0:45:53.140 ******** 2026-03-26 05:49:02.284385 | 
orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284396 | orchestrator | 2026-03-26 05:49:02.284407 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 05:49:02.284420 | orchestrator | Thursday 26 March 2026 05:48:30 +0000 (0:00:00.744) 0:45:53.885 ******** 2026-03-26 05:49:02.284433 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284445 | orchestrator | 2026-03-26 05:49:02.284457 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 05:49:02.284469 | orchestrator | Thursday 26 March 2026 05:48:30 +0000 (0:00:00.762) 0:45:54.647 ******** 2026-03-26 05:49:02.284481 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284494 | orchestrator | 2026-03-26 05:49:02.284506 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 05:49:02.284519 | orchestrator | Thursday 26 March 2026 05:48:31 +0000 (0:00:00.816) 0:45:55.463 ******** 2026-03-26 05:49:02.284531 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.284544 | orchestrator | 2026-03-26 05:49:02.284556 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 05:49:02.284569 | orchestrator | Thursday 26 March 2026 05:48:32 +0000 (0:00:00.819) 0:45:56.282 ******** 2026-03-26 05:49:02.284582 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.284594 | orchestrator | 2026-03-26 05:49:02.284623 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-26 05:49:02.284646 | orchestrator | Thursday 26 March 2026 05:48:33 +0000 (0:00:00.790) 0:45:57.073 ******** 2026-03-26 05:49:02.284659 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284671 | orchestrator | 2026-03-26 05:49:02.284684 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 
05:49:02.284695 | orchestrator | Thursday 26 March 2026 05:48:34 +0000 (0:00:00.769) 0:45:57.842 ******** 2026-03-26 05:49:02.284706 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284716 | orchestrator | 2026-03-26 05:49:02.284727 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 05:49:02.284738 | orchestrator | Thursday 26 March 2026 05:48:34 +0000 (0:00:00.768) 0:45:58.611 ******** 2026-03-26 05:49:02.284748 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284759 | orchestrator | 2026-03-26 05:49:02.284770 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 05:49:02.284781 | orchestrator | Thursday 26 March 2026 05:48:35 +0000 (0:00:00.761) 0:45:59.373 ******** 2026-03-26 05:49:02.284791 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284819 | orchestrator | 2026-03-26 05:49:02.284831 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 05:49:02.284842 | orchestrator | Thursday 26 March 2026 05:48:36 +0000 (0:00:00.770) 0:46:00.144 ******** 2026-03-26 05:49:02.284852 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284863 | orchestrator | 2026-03-26 05:49:02.284874 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 05:49:02.284885 | orchestrator | Thursday 26 March 2026 05:48:37 +0000 (0:00:00.783) 0:46:00.928 ******** 2026-03-26 05:49:02.284896 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284906 | orchestrator | 2026-03-26 05:49:02.284917 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-26 05:49:02.284928 | orchestrator | Thursday 26 March 2026 05:48:38 +0000 (0:00:00.769) 0:46:01.698 ******** 2026-03-26 05:49:02.284938 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.284957 | 
orchestrator | 2026-03-26 05:49:02.284968 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-26 05:49:02.284979 | orchestrator | Thursday 26 March 2026 05:48:38 +0000 (0:00:00.762) 0:46:02.461 ******** 2026-03-26 05:49:02.285005 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285016 | orchestrator | 2026-03-26 05:49:02.285027 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 05:49:02.285038 | orchestrator | Thursday 26 March 2026 05:48:39 +0000 (0:00:00.783) 0:46:03.244 ******** 2026-03-26 05:49:02.285065 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285077 | orchestrator | 2026-03-26 05:49:02.285088 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 05:49:02.285099 | orchestrator | Thursday 26 March 2026 05:48:40 +0000 (0:00:00.760) 0:46:04.005 ******** 2026-03-26 05:49:02.285110 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285120 | orchestrator | 2026-03-26 05:49:02.285131 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-26 05:49:02.285142 | orchestrator | Thursday 26 March 2026 05:48:41 +0000 (0:00:00.915) 0:46:04.920 ******** 2026-03-26 05:49:02.285152 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285163 | orchestrator | 2026-03-26 05:49:02.285174 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-26 05:49:02.285185 | orchestrator | Thursday 26 March 2026 05:48:42 +0000 (0:00:00.794) 0:46:05.715 ******** 2026-03-26 05:49:02.285195 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285206 | orchestrator | 2026-03-26 05:49:02.285217 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-26 05:49:02.285227 | orchestrator | Thursday 26 
March 2026 05:48:42 +0000 (0:00:00.747) 0:46:06.463 ******** 2026-03-26 05:49:02.285238 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.285249 | orchestrator | 2026-03-26 05:49:02.285259 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 05:49:02.285270 | orchestrator | Thursday 26 March 2026 05:48:44 +0000 (0:00:01.632) 0:46:08.095 ******** 2026-03-26 05:49:02.285281 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.285292 | orchestrator | 2026-03-26 05:49:02.285303 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 05:49:02.285313 | orchestrator | Thursday 26 March 2026 05:48:46 +0000 (0:00:01.902) 0:46:09.998 ******** 2026-03-26 05:49:02.285324 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-26 05:49:02.285336 | orchestrator | 2026-03-26 05:49:02.285346 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-26 05:49:02.285357 | orchestrator | Thursday 26 March 2026 05:48:47 +0000 (0:00:01.129) 0:46:11.127 ******** 2026-03-26 05:49:02.285368 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285379 | orchestrator | 2026-03-26 05:49:02.285389 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-26 05:49:02.285400 | orchestrator | Thursday 26 March 2026 05:48:48 +0000 (0:00:01.129) 0:46:12.256 ******** 2026-03-26 05:49:02.285411 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285421 | orchestrator | 2026-03-26 05:49:02.285432 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-26 05:49:02.285443 | orchestrator | Thursday 26 March 2026 05:48:49 +0000 (0:00:01.132) 0:46:13.389 ******** 2026-03-26 05:49:02.285453 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-26 05:49:02.285464 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-26 05:49:02.285475 | orchestrator | 2026-03-26 05:49:02.285486 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-26 05:49:02.285497 | orchestrator | Thursday 26 March 2026 05:48:51 +0000 (0:00:01.803) 0:46:15.193 ******** 2026-03-26 05:49:02.285507 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.285518 | orchestrator | 2026-03-26 05:49:02.285529 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-26 05:49:02.285546 | orchestrator | Thursday 26 March 2026 05:48:53 +0000 (0:00:01.466) 0:46:16.659 ******** 2026-03-26 05:49:02.285557 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285568 | orchestrator | 2026-03-26 05:49:02.285579 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-26 05:49:02.285589 | orchestrator | Thursday 26 March 2026 05:48:54 +0000 (0:00:01.114) 0:46:17.774 ******** 2026-03-26 05:49:02.285600 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285611 | orchestrator | 2026-03-26 05:49:02.285622 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 05:49:02.285632 | orchestrator | Thursday 26 March 2026 05:48:55 +0000 (0:00:00.894) 0:46:18.668 ******** 2026-03-26 05:49:02.285643 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285654 | orchestrator | 2026-03-26 05:49:02.285665 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 05:49:02.285675 | orchestrator | Thursday 26 March 2026 05:48:55 +0000 (0:00:00.807) 0:46:19.476 ******** 2026-03-26 05:49:02.285686 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-03-26 05:49:02.285697 | orchestrator | 2026-03-26 05:49:02.285708 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-26 05:49:02.285718 | orchestrator | Thursday 26 March 2026 05:48:56 +0000 (0:00:01.157) 0:46:20.633 ******** 2026-03-26 05:49:02.285729 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:02.285740 | orchestrator | 2026-03-26 05:49:02.285751 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-26 05:49:02.285761 | orchestrator | Thursday 26 March 2026 05:48:58 +0000 (0:00:01.828) 0:46:22.463 ******** 2026-03-26 05:49:02.285772 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 05:49:02.285783 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 05:49:02.285794 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 05:49:02.285824 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285835 | orchestrator | 2026-03-26 05:49:02.285847 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-26 05:49:02.285857 | orchestrator | Thursday 26 March 2026 05:48:59 +0000 (0:00:01.137) 0:46:23.601 ******** 2026-03-26 05:49:02.285873 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:02.285884 | orchestrator | 2026-03-26 05:49:02.285895 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-26 05:49:02.285906 | orchestrator | Thursday 26 March 2026 05:49:01 +0000 (0:00:01.164) 0:46:24.765 ******** 2026-03-26 05:49:02.285923 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.089456 | orchestrator | 2026-03-26 05:49:45.089578 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-26 05:49:45.089596 | 
orchestrator | Thursday 26 March 2026 05:49:02 +0000 (0:00:01.166) 0:46:25.932 ******** 2026-03-26 05:49:45.089608 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.089621 | orchestrator | 2026-03-26 05:49:45.089632 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-26 05:49:45.089643 | orchestrator | Thursday 26 March 2026 05:49:03 +0000 (0:00:01.132) 0:46:27.064 ******** 2026-03-26 05:49:45.089654 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.089665 | orchestrator | 2026-03-26 05:49:45.089676 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-26 05:49:45.089687 | orchestrator | Thursday 26 March 2026 05:49:04 +0000 (0:00:01.165) 0:46:28.230 ******** 2026-03-26 05:49:45.089698 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.089709 | orchestrator | 2026-03-26 05:49:45.089719 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 05:49:45.089730 | orchestrator | Thursday 26 March 2026 05:49:05 +0000 (0:00:00.788) 0:46:29.018 ******** 2026-03-26 05:49:45.089741 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:45.089778 | orchestrator | 2026-03-26 05:49:45.089826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 05:49:45.089839 | orchestrator | Thursday 26 March 2026 05:49:07 +0000 (0:00:02.070) 0:46:31.088 ******** 2026-03-26 05:49:45.089849 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:45.089860 | orchestrator | 2026-03-26 05:49:45.089871 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 05:49:45.089881 | orchestrator | Thursday 26 March 2026 05:49:08 +0000 (0:00:00.772) 0:46:31.861 ******** 2026-03-26 05:49:45.089892 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-03-26 05:49:45.089904 | orchestrator | 2026-03-26 05:49:45.089914 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-26 05:49:45.089925 | orchestrator | Thursday 26 March 2026 05:49:09 +0000 (0:00:01.276) 0:46:33.137 ******** 2026-03-26 05:49:45.089936 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.089946 | orchestrator | 2026-03-26 05:49:45.089957 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-26 05:49:45.089967 | orchestrator | Thursday 26 March 2026 05:49:10 +0000 (0:00:01.178) 0:46:34.315 ******** 2026-03-26 05:49:45.089978 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.089989 | orchestrator | 2026-03-26 05:49:45.089999 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-26 05:49:45.090010 | orchestrator | Thursday 26 March 2026 05:49:11 +0000 (0:00:01.136) 0:46:35.452 ******** 2026-03-26 05:49:45.090076 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090087 | orchestrator | 2026-03-26 05:49:45.090098 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-26 05:49:45.090109 | orchestrator | Thursday 26 March 2026 05:49:12 +0000 (0:00:01.155) 0:46:36.608 ******** 2026-03-26 05:49:45.090120 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090130 | orchestrator | 2026-03-26 05:49:45.090141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-26 05:49:45.090152 | orchestrator | Thursday 26 March 2026 05:49:14 +0000 (0:00:01.147) 0:46:37.756 ******** 2026-03-26 05:49:45.090173 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090184 | orchestrator | 2026-03-26 05:49:45.090195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-26 05:49:45.090205 | orchestrator | 
Thursday 26 March 2026 05:49:15 +0000 (0:00:01.265) 0:46:39.021 ******** 2026-03-26 05:49:45.090216 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090227 | orchestrator | 2026-03-26 05:49:45.090237 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-26 05:49:45.090248 | orchestrator | Thursday 26 March 2026 05:49:16 +0000 (0:00:01.162) 0:46:40.184 ******** 2026-03-26 05:49:45.090259 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090270 | orchestrator | 2026-03-26 05:49:45.090280 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-26 05:49:45.090291 | orchestrator | Thursday 26 March 2026 05:49:17 +0000 (0:00:01.115) 0:46:41.300 ******** 2026-03-26 05:49:45.090302 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090313 | orchestrator | 2026-03-26 05:49:45.090323 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-26 05:49:45.090334 | orchestrator | Thursday 26 March 2026 05:49:18 +0000 (0:00:01.140) 0:46:42.440 ******** 2026-03-26 05:49:45.090345 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:45.090356 | orchestrator | 2026-03-26 05:49:45.090366 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 05:49:45.090377 | orchestrator | Thursday 26 March 2026 05:49:19 +0000 (0:00:00.804) 0:46:43.244 ******** 2026-03-26 05:49:45.090388 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-03-26 05:49:45.090399 | orchestrator | 2026-03-26 05:49:45.090410 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-26 05:49:45.090430 | orchestrator | Thursday 26 March 2026 05:49:20 +0000 (0:00:01.109) 0:46:44.354 ******** 2026-03-26 05:49:45.090442 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-03-26 05:49:45.090453 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-26 05:49:45.090464 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-26 05:49:45.090474 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-26 05:49:45.090485 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-26 05:49:45.090496 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-26 05:49:45.090520 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-26 05:49:45.090532 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-26 05:49:45.090542 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 05:49:45.090571 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 05:49:45.090583 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 05:49:45.090594 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 05:49:45.090604 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 05:49:45.090616 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-26 05:49:45.090627 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-26 05:49:45.090637 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-26 05:49:45.090648 | orchestrator | 2026-03-26 05:49:45.090659 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 05:49:45.090670 | orchestrator | Thursday 26 March 2026 05:49:27 +0000 (0:00:06.443) 0:46:50.798 ******** 2026-03-26 05:49:45.090680 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-03-26 05:49:45.090691 | orchestrator | 2026-03-26 05:49:45.090702 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-03-26 05:49:45.090712 | orchestrator | Thursday 26 March 2026 05:49:28 +0000 (0:00:01.159) 0:46:51.957 ******** 2026-03-26 05:49:45.090723 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 05:49:45.090735 | orchestrator | 2026-03-26 05:49:45.090747 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-26 05:49:45.090766 | orchestrator | Thursday 26 March 2026 05:49:29 +0000 (0:00:01.498) 0:46:53.456 ******** 2026-03-26 05:49:45.090817 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 05:49:45.090841 | orchestrator | 2026-03-26 05:49:45.090859 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 05:49:45.090876 | orchestrator | Thursday 26 March 2026 05:49:31 +0000 (0:00:01.587) 0:46:55.044 ******** 2026-03-26 05:49:45.090892 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090908 | orchestrator | 2026-03-26 05:49:45.090925 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-26 05:49:45.090942 | orchestrator | Thursday 26 March 2026 05:49:32 +0000 (0:00:00.838) 0:46:55.883 ******** 2026-03-26 05:49:45.090961 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.090979 | orchestrator | 2026-03-26 05:49:45.090999 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-26 05:49:45.091023 | orchestrator | Thursday 26 March 2026 05:49:33 +0000 (0:00:00.797) 0:46:56.681 ******** 2026-03-26 05:49:45.091040 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091051 | orchestrator | 2026-03-26 05:49:45.091061 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-03-26 05:49:45.091072 | orchestrator | Thursday 26 March 2026 05:49:33 +0000 (0:00:00.763) 0:46:57.444 ******** 2026-03-26 05:49:45.091083 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091093 | orchestrator | 2026-03-26 05:49:45.091115 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-26 05:49:45.091125 | orchestrator | Thursday 26 March 2026 05:49:34 +0000 (0:00:00.798) 0:46:58.242 ******** 2026-03-26 05:49:45.091136 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091146 | orchestrator | 2026-03-26 05:49:45.091157 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-26 05:49:45.091167 | orchestrator | Thursday 26 March 2026 05:49:35 +0000 (0:00:00.764) 0:46:59.007 ******** 2026-03-26 05:49:45.091178 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091188 | orchestrator | 2026-03-26 05:49:45.091199 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-26 05:49:45.091209 | orchestrator | Thursday 26 March 2026 05:49:36 +0000 (0:00:00.757) 0:46:59.765 ******** 2026-03-26 05:49:45.091220 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091230 | orchestrator | 2026-03-26 05:49:45.091240 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-26 05:49:45.091251 | orchestrator | Thursday 26 March 2026 05:49:36 +0000 (0:00:00.750) 0:47:00.515 ******** 2026-03-26 05:49:45.091262 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091272 | orchestrator | 2026-03-26 05:49:45.091283 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-26 05:49:45.091293 | orchestrator | Thursday 26 
March 2026 05:49:37 +0000 (0:00:00.771) 0:47:01.287 ******** 2026-03-26 05:49:45.091304 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091314 | orchestrator | 2026-03-26 05:49:45.091325 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-26 05:49:45.091335 | orchestrator | Thursday 26 March 2026 05:49:38 +0000 (0:00:00.840) 0:47:02.128 ******** 2026-03-26 05:49:45.091346 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:49:45.091356 | orchestrator | 2026-03-26 05:49:45.091367 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-26 05:49:45.091377 | orchestrator | Thursday 26 March 2026 05:49:39 +0000 (0:00:00.786) 0:47:02.915 ******** 2026-03-26 05:49:45.091388 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:49:45.091398 | orchestrator | 2026-03-26 05:49:45.091409 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-26 05:49:45.091419 | orchestrator | Thursday 26 March 2026 05:49:40 +0000 (0:00:00.848) 0:47:03.763 ******** 2026-03-26 05:49:45.091430 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-26 05:49:45.091440 | orchestrator | 2026-03-26 05:49:45.091458 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-26 05:49:45.091469 | orchestrator | Thursday 26 March 2026 05:49:44 +0000 (0:00:04.124) 0:47:07.888 ******** 2026-03-26 05:49:45.091490 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 05:50:27.373642 | orchestrator | 2026-03-26 05:50:27.373826 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 05:50:27.373853 | orchestrator | Thursday 26 March 2026 05:49:45 +0000 (0:00:00.850) 0:47:08.738 ******** 2026-03-26 05:50:27.373872 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-26 05:50:27.373887 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-26 05:50:27.373900 | orchestrator | 2026-03-26 05:50:27.373911 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 05:50:27.373945 | orchestrator | Thursday 26 March 2026 05:49:52 +0000 (0:00:07.356) 0:47:16.095 ******** 2026-03-26 05:50:27.373956 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.373968 | orchestrator | 2026-03-26 05:50:27.373979 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 05:50:27.373990 | orchestrator | Thursday 26 March 2026 05:49:53 +0000 (0:00:00.790) 0:47:16.885 ******** 2026-03-26 05:50:27.374000 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374012 | orchestrator | 2026-03-26 05:50:27.374076 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:50:27.374089 | orchestrator | Thursday 26 March 2026 05:49:54 +0000 (0:00:00.790) 0:47:17.676 ******** 2026-03-26 05:50:27.374100 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374110 | orchestrator | 2026-03-26 05:50:27.374121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-03-26 05:50:27.374132 | orchestrator | Thursday 26 March 2026 05:49:54 +0000 (0:00:00.824) 0:47:18.501 ******** 2026-03-26 05:50:27.374143 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374154 | orchestrator | 2026-03-26 05:50:27.374164 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:50:27.374176 | orchestrator | Thursday 26 March 2026 05:49:55 +0000 (0:00:00.816) 0:47:19.318 ******** 2026-03-26 05:50:27.374189 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374201 | orchestrator | 2026-03-26 05:50:27.374214 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:50:27.374226 | orchestrator | Thursday 26 March 2026 05:49:56 +0000 (0:00:00.794) 0:47:20.113 ******** 2026-03-26 05:50:27.374239 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:50:27.374253 | orchestrator | 2026-03-26 05:50:27.374265 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:50:27.374277 | orchestrator | Thursday 26 March 2026 05:49:57 +0000 (0:00:00.932) 0:47:21.045 ******** 2026-03-26 05:50:27.374289 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 05:50:27.374305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 05:50:27.374318 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 05:50:27.374330 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374342 | orchestrator | 2026-03-26 05:50:27.374354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:50:27.374366 | orchestrator | Thursday 26 March 2026 05:49:58 +0000 (0:00:01.433) 0:47:22.478 ******** 2026-03-26 05:50:27.374378 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 05:50:27.374390 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 05:50:27.374402 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 05:50:27.374414 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374426 | orchestrator | 2026-03-26 05:50:27.374439 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:50:27.374451 | orchestrator | Thursday 26 March 2026 05:50:00 +0000 (0:00:01.487) 0:47:23.966 ******** 2026-03-26 05:50:27.374463 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 05:50:27.374474 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 05:50:27.374484 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 05:50:27.374495 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374506 | orchestrator | 2026-03-26 05:50:27.374517 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:50:27.374527 | orchestrator | Thursday 26 March 2026 05:50:01 +0000 (0:00:01.081) 0:47:25.048 ******** 2026-03-26 05:50:27.374538 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:50:27.374549 | orchestrator | 2026-03-26 05:50:27.374560 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:50:27.374580 | orchestrator | Thursday 26 March 2026 05:50:02 +0000 (0:00:00.805) 0:47:25.854 ******** 2026-03-26 05:50:27.374591 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-26 05:50:27.374601 | orchestrator | 2026-03-26 05:50:27.374612 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 05:50:27.374623 | orchestrator | Thursday 26 March 2026 05:50:03 +0000 (0:00:01.035) 0:47:26.889 ******** 2026-03-26 05:50:27.374647 | orchestrator | changed: [testbed-node-5] 2026-03-26 05:50:27.374658 | orchestrator | 
2026-03-26 05:50:27.374669 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-03-26 05:50:27.374680 | orchestrator | Thursday 26 March 2026 05:50:04 +0000 (0:00:01.481) 0:47:28.370 ******** 2026-03-26 05:50:27.374690 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:50:27.374701 | orchestrator | 2026-03-26 05:50:27.374734 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-03-26 05:50:27.374753 | orchestrator | Thursday 26 March 2026 05:50:05 +0000 (0:00:00.794) 0:47:29.165 ******** 2026-03-26 05:50:27.374794 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:50:27.374814 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:50:27.374831 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:50:27.374849 | orchestrator | 2026-03-26 05:50:27.374865 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-03-26 05:50:27.374881 | orchestrator | Thursday 26 March 2026 05:50:06 +0000 (0:00:01.470) 0:47:30.635 ******** 2026-03-26 05:50:27.374899 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-03-26 05:50:27.374917 | orchestrator | 2026-03-26 05:50:27.374935 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-03-26 05:50:27.374952 | orchestrator | Thursday 26 March 2026 05:50:08 +0000 (0:00:01.062) 0:47:31.698 ******** 2026-03-26 05:50:27.374971 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.374990 | orchestrator | 2026-03-26 05:50:27.375010 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-03-26 05:50:27.375027 | orchestrator | Thursday 26 March 2026 05:50:09 +0000 (0:00:01.124) 
0:47:32.822 ******** 2026-03-26 05:50:27.375043 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.375054 | orchestrator | 2026-03-26 05:50:27.375064 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-03-26 05:50:27.375075 | orchestrator | Thursday 26 March 2026 05:50:10 +0000 (0:00:01.083) 0:47:33.905 ******** 2026-03-26 05:50:27.375085 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:50:27.375096 | orchestrator | 2026-03-26 05:50:27.375106 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-03-26 05:50:27.375117 | orchestrator | Thursday 26 March 2026 05:50:11 +0000 (0:00:01.468) 0:47:35.374 ******** 2026-03-26 05:50:27.375127 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:50:27.375138 | orchestrator | 2026-03-26 05:50:27.375148 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-03-26 05:50:27.375159 | orchestrator | Thursday 26 March 2026 05:50:12 +0000 (0:00:01.108) 0:47:36.482 ******** 2026-03-26 05:50:27.375169 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-03-26 05:50:27.375180 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-03-26 05:50:27.375191 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-03-26 05:50:27.375201 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-03-26 05:50:27.375212 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-03-26 05:50:27.375222 | orchestrator | 2026-03-26 05:50:27.375233 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-03-26 05:50:27.375243 | orchestrator | Thursday 26 March 2026 05:50:16 +0000 (0:00:03.633) 0:47:40.116 ******** 2026-03-26 
05:50:27.375265 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.375276 | orchestrator | 2026-03-26 05:50:27.375287 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-03-26 05:50:27.375297 | orchestrator | Thursday 26 March 2026 05:50:17 +0000 (0:00:00.765) 0:47:40.882 ******** 2026-03-26 05:50:27.375308 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-03-26 05:50:27.375318 | orchestrator | 2026-03-26 05:50:27.375329 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-03-26 05:50:27.375340 | orchestrator | Thursday 26 March 2026 05:50:18 +0000 (0:00:01.113) 0:47:41.995 ******** 2026-03-26 05:50:27.375350 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-03-26 05:50:27.375361 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-03-26 05:50:27.375371 | orchestrator | 2026-03-26 05:50:27.375382 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-03-26 05:50:27.375392 | orchestrator | Thursday 26 March 2026 05:50:20 +0000 (0:00:01.798) 0:47:43.794 ******** 2026-03-26 05:50:27.375403 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 05:50:27.375414 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-26 05:50:27.375424 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 05:50:27.375435 | orchestrator | 2026-03-26 05:50:27.375445 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-03-26 05:50:27.375456 | orchestrator | Thursday 26 March 2026 05:50:23 +0000 (0:00:03.176) 0:47:46.971 ******** 2026-03-26 05:50:27.375466 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-26 05:50:27.375477 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-26 
05:50:27.375488 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:50:27.375498 | orchestrator | 2026-03-26 05:50:27.375509 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-03-26 05:50:27.375519 | orchestrator | Thursday 26 March 2026 05:50:24 +0000 (0:00:01.623) 0:47:48.594 ******** 2026-03-26 05:50:27.375530 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.375541 | orchestrator | 2026-03-26 05:50:27.375552 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-03-26 05:50:27.375562 | orchestrator | Thursday 26 March 2026 05:50:25 +0000 (0:00:00.906) 0:47:49.501 ******** 2026-03-26 05:50:27.375581 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.375592 | orchestrator | 2026-03-26 05:50:27.375602 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-03-26 05:50:27.375613 | orchestrator | Thursday 26 March 2026 05:50:26 +0000 (0:00:00.770) 0:47:50.271 ******** 2026-03-26 05:50:27.375624 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:50:27.375634 | orchestrator | 2026-03-26 05:50:27.375655 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-03-26 05:52:44.792755 | orchestrator | Thursday 26 March 2026 05:50:27 +0000 (0:00:00.748) 0:47:51.020 ******** 2026-03-26 05:52:44.792914 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-03-26 05:52:44.792931 | orchestrator | 2026-03-26 05:52:44.792945 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-03-26 05:52:44.792956 | orchestrator | Thursday 26 March 2026 05:50:28 +0000 (0:00:01.235) 0:47:52.255 ******** 2026-03-26 05:52:44.792968 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:52:44.792980 | orchestrator | 2026-03-26 05:52:44.792992 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-03-26 05:52:44.793002 | orchestrator | Thursday 26 March 2026 05:50:30 +0000 (0:00:01.465) 0:47:53.721 ******** 2026-03-26 05:52:44.793013 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:52:44.793024 | orchestrator | 2026-03-26 05:52:44.793034 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-03-26 05:52:44.793045 | orchestrator | Thursday 26 March 2026 05:50:33 +0000 (0:00:03.351) 0:47:57.072 ******** 2026-03-26 05:52:44.793085 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-03-26 05:52:44.793103 | orchestrator | 2026-03-26 05:52:44.793122 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-03-26 05:52:44.793141 | orchestrator | Thursday 26 March 2026 05:50:34 +0000 (0:00:01.106) 0:47:58.179 ******** 2026-03-26 05:52:44.793158 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:52:44.793175 | orchestrator | 2026-03-26 05:52:44.793193 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-03-26 05:52:44.793212 | orchestrator | Thursday 26 March 2026 05:50:36 +0000 (0:00:01.966) 0:48:00.145 ******** 2026-03-26 05:52:44.793232 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:52:44.793251 | orchestrator | 2026-03-26 05:52:44.793270 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-03-26 05:52:44.793284 | orchestrator | Thursday 26 March 2026 05:50:38 +0000 (0:00:01.927) 0:48:02.073 ******** 2026-03-26 05:52:44.793296 | orchestrator | ok: [testbed-node-5] 2026-03-26 05:52:44.793308 | orchestrator | 2026-03-26 05:52:44.793321 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-03-26 05:52:44.793333 | orchestrator | Thursday 26 March 2026 05:50:40 +0000 (0:00:02.255) 0:48:04.329 ******** 2026-03-26 
05:52:44.793346 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793360 | orchestrator |
2026-03-26 05:52:44.793371 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-26 05:52:44.793381 | orchestrator | Thursday 26 March 2026 05:50:41 +0000 (0:00:01.120) 0:48:05.449 ********
2026-03-26 05:52:44.793392 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793402 | orchestrator |
2026-03-26 05:52:44.793413 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-26 05:52:44.793423 | orchestrator | Thursday 26 March 2026 05:50:42 +0000 (0:00:01.154) 0:48:06.604 ********
2026-03-26 05:52:44.793434 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-03-26 05:52:44.793445 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-03-26 05:52:44.793455 | orchestrator |
2026-03-26 05:52:44.793466 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-26 05:52:44.793476 | orchestrator | Thursday 26 March 2026 05:50:44 +0000 (0:00:01.848) 0:48:08.452 ********
2026-03-26 05:52:44.793488 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-03-26 05:52:44.793498 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-03-26 05:52:44.793509 | orchestrator |
2026-03-26 05:52:44.793519 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-26 05:52:44.793530 | orchestrator | Thursday 26 March 2026 05:50:47 +0000 (0:00:02.863) 0:48:11.315 ********
2026-03-26 05:52:44.793541 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-26 05:52:44.793552 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-03-26 05:52:44.793562 | orchestrator |
2026-03-26 05:52:44.793573 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-26 05:52:44.793584 | orchestrator | Thursday 26 March 2026 05:50:51 +0000 (0:00:04.175) 0:48:15.491 ********
2026-03-26 05:52:44.793594 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793605 | orchestrator |
2026-03-26 05:52:44.793615 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-26 05:52:44.793626 | orchestrator | Thursday 26 March 2026 05:50:52 +0000 (0:00:00.909) 0:48:16.401 ********
2026-03-26 05:52:44.793637 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-26 05:52:44.793649 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:52:44.793660 | orchestrator |
2026-03-26 05:52:44.793670 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-26 05:52:44.793681 | orchestrator | Thursday 26 March 2026 05:51:06 +0000 (0:00:13.348) 0:48:29.749 ********
2026-03-26 05:52:44.793691 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793702 | orchestrator |
2026-03-26 05:52:44.793738 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-03-26 05:52:44.793760 | orchestrator | Thursday 26 March 2026 05:51:06 +0000 (0:00:00.882) 0:48:30.632 ********
2026-03-26 05:52:44.793771 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793782 | orchestrator |
2026-03-26 05:52:44.793792 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-03-26 05:52:44.793803 | orchestrator | Thursday 26 March 2026 05:51:07 +0000 (0:00:00.749) 0:48:31.382 ********
2026-03-26 05:52:44.793813 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793824 | orchestrator |
2026-03-26 05:52:44.793835 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-03-26 05:52:44.793863 | orchestrator | Thursday 26 March 2026 05:51:08 +0000 (0:00:00.787) 0:48:32.169 ********
2026-03-26 05:52:44.793874 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:52:44.793885 | orchestrator |
2026-03-26 05:52:44.793896 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-26 05:52:44.793907 | orchestrator | Thursday 26 March 2026 05:51:10 +0000 (0:00:01.931) 0:48:34.101 ********
2026-03-26 05:52:44.793937 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793948 | orchestrator |
2026-03-26 05:52:44.793959 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-26 05:52:44.793969 | orchestrator | Thursday 26 March 2026 05:51:11 +0000 (0:00:00.779) 0:48:34.880 ********
2026-03-26 05:52:44.793980 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.793991 | orchestrator |
2026-03-26 05:52:44.794001 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-26 05:52:44.794012 | orchestrator | Thursday 26 March 2026 05:51:12 +0000 (0:00:00.814) 0:48:35.695 ********
2026-03-26 05:52:44.794084 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.794095 | orchestrator |
2026-03-26 05:52:44.794106 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-26 05:52:44.794117 | orchestrator | Thursday 26 March 2026 05:51:12 +0000 (0:00:00.795) 0:48:36.491 ********
2026-03-26 05:52:44.794127 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.794138 | orchestrator |
2026-03-26 05:52:44.794148 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-26 05:52:44.794159 | orchestrator | Thursday 26 March 2026 05:51:13 +0000 (0:00:00.776) 0:48:37.268 ********
2026-03-26 05:52:44.794169 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.794180 | orchestrator |
2026-03-26 05:52:44.794190 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-26 05:52:44.794201 | orchestrator | Thursday 26 March 2026 05:51:14 +0000 (0:00:00.769) 0:48:38.037 ********
2026-03-26 05:52:44.794211 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.794222 | orchestrator |
2026-03-26 05:52:44.794232 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-26 05:52:44.794243 | orchestrator | Thursday 26 March 2026 05:51:15 +0000 (0:00:00.764) 0:48:38.802 ********
2026-03-26 05:52:44.794253 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:52:44.794264 | orchestrator |
2026-03-26 05:52:44.794274 | orchestrator | PLAY [Complete osd upgrade] ****************************************************
2026-03-26 05:52:44.794285 | orchestrator |
2026-03-26 05:52:44.794295 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 05:52:44.794306 | orchestrator | Thursday 26 March 2026 05:51:16 +0000 (0:00:01.800) 0:48:40.602 ********
2026-03-26 05:52:44.794316 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:52:44.794327 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:52:44.794338 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:52:44.794348 | orchestrator |
2026-03-26 05:52:44.794359 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 05:52:44.794369 | orchestrator | Thursday 26 March 2026 05:51:18 +0000 (0:00:01.868) 0:48:42.471 ********
2026-03-26 05:52:44.794380 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:52:44.794390 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:52:44.794409 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:52:44.794419 | orchestrator |
2026-03-26 05:52:44.794430 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-03-26 05:52:44.794441 | orchestrator | Thursday 26 March 2026 05:51:20 +0000 (0:00:01.358) 0:48:43.830 ********
2026-03-26 05:52:44.794451 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-03-26 05:52:44.794462 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-03-26 05:52:44.794473 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-03-26 05:52:44.794484 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-03-26 05:52:44.794497 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-03-26 05:52:44.794508 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-03-26 05:52:44.794518 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-03-26 05:52:44.794529 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-03-26 05:52:44.794539 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-03-26 05:52:44.794550 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-03-26 05:52:44.794561 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-03-26 05:52:44.794571 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-03-26 05:52:44.794582 | orchestrator | skipping:
[testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-03-26 05:52:44.794592 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-03-26 05:52:44.794603 | orchestrator | 2026-03-26 05:52:44.794613 | orchestrator | TASK [Unset osd flags] ********************************************************* 2026-03-26 05:52:44.794624 | orchestrator | Thursday 26 March 2026 05:52:34 +0000 (0:01:14.512) 0:49:58.342 ******** 2026-03-26 05:52:44.794634 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-03-26 05:52:44.794650 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-03-26 05:52:44.794661 | orchestrator | 2026-03-26 05:52:44.794671 | orchestrator | TASK [Re-enable balancer] ****************************************************** 2026-03-26 05:52:44.794682 | orchestrator | Thursday 26 March 2026 05:52:41 +0000 (0:00:06.923) 0:50:05.265 ******** 2026-03-26 05:52:44.794692 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:52:44.794703 | orchestrator | 2026-03-26 05:52:44.794755 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] ********************** 2026-03-26 05:53:08.104957 | orchestrator | 2026-03-26 05:53:08.105075 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:53:08.105091 | orchestrator | Thursday 26 March 2026 05:52:44 +0000 (0:00:03.174) 0:50:08.439 ******** 2026-03-26 05:53:08.105103 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-03-26 05:53:08.105114 | orchestrator | 2026-03-26 05:53:08.105125 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:53:08.105136 | orchestrator | Thursday 26 March 2026 05:52:45 +0000 (0:00:01.181) 0:50:09.621 ******** 2026-03-26 05:53:08.105147 | orchestrator | ok: 
[testbed-node-0]
2026-03-26 05:53:08.105159 | orchestrator |
2026-03-26 05:53:08.105170 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-26 05:53:08.105180 | orchestrator | Thursday 26 March 2026 05:52:47 +0000 (0:00:01.579) 0:50:11.201 ********
2026-03-26 05:53:08.105218 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:08.105229 | orchestrator |
2026-03-26 05:53:08.105240 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 05:53:08.105252 | orchestrator | Thursday 26 March 2026 05:52:48 +0000 (0:00:01.207) 0:50:12.409 ********
2026-03-26 05:53:08.105262 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:08.105273 | orchestrator |
2026-03-26 05:53:08.105284 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 05:53:08.105295 | orchestrator | Thursday 26 March 2026 05:52:50 +0000 (0:00:01.523) 0:50:13.932 ********
2026-03-26 05:53:08.105305 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:08.105316 | orchestrator |
2026-03-26 05:53:08.105327 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-26 05:53:08.105337 | orchestrator | Thursday 26 March 2026 05:52:51 +0000 (0:00:01.180) 0:50:15.113 ********
2026-03-26 05:53:08.105348 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:08.105358 | orchestrator |
2026-03-26 05:53:08.105369 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-26 05:53:08.105380 | orchestrator | Thursday 26 March 2026 05:52:52 +0000 (0:00:01.197) 0:50:16.310 ********
2026-03-26 05:53:08.105390 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:08.105406 | orchestrator |
2026-03-26 05:53:08.105424 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-26 05:53:08.105444 | orchestrator | Thursday 26 March 2026 05:52:53 +0000 (0:00:01.188) 0:50:17.499 ********
2026-03-26 05:53:08.105461 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:08.105479 | orchestrator |
2026-03-26 05:53:08.105497 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-26 05:53:08.105515 | orchestrator | Thursday 26 March 2026 05:52:55 +0000 (0:00:01.166) 0:50:18.665 ********
2026-03-26 05:53:08.105533 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:08.105551 | orchestrator |
2026-03-26 05:53:08.105570 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-26 05:53:08.105589 | orchestrator | Thursday 26 March 2026 05:52:56 +0000 (0:00:01.243) 0:50:19.909 ********
2026-03-26 05:53:08.105608 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:53:08.105627 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:53:08.105646 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:53:08.105664 | orchestrator |
2026-03-26 05:53:08.105683 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-26 05:53:08.105703 | orchestrator | Thursday 26 March 2026 05:52:57 +0000 (0:00:01.682) 0:50:21.592 ********
2026-03-26 05:53:08.105881 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:08.105895 | orchestrator |
2026-03-26 05:53:08.105906 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-26 05:53:08.105917 | orchestrator | Thursday 26 March 2026 05:52:59 +0000 (0:00:01.300) 0:50:22.893 ********
2026-03-26 05:53:08.105928 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:53:08.105939 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:53:08.105949 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:53:08.105960 | orchestrator |
2026-03-26 05:53:08.105971 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-26 05:53:08.105982 | orchestrator | Thursday 26 March 2026 05:53:02 +0000 (0:00:03.160) 0:50:26.053 ********
2026-03-26 05:53:08.105993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-26 05:53:08.106003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-26 05:53:08.106069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-26 05:53:08.106081 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:08.106092 | orchestrator |
2026-03-26 05:53:08.106103 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-26 05:53:08.106128 | orchestrator | Thursday 26 March 2026 05:53:03 +0000 (0:00:01.490) 0:50:27.543 ********
2026-03-26 05:53:08.106140 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106169 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106205 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106216 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:08.106228 | orchestrator |
2026-03-26 05:53:08.106239 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-26 05:53:08.106249 | orchestrator | Thursday 26 March 2026 05:53:05 +0000 (0:00:01.726) 0:50:29.270 ********
2026-03-26 05:53:08.106263 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106289 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106300 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:08.106311 | orchestrator |
2026-03-26 05:53:08.106322 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-26 05:53:08.106332 | orchestrator | Thursday 26 March 2026 05:53:06 +0000 (0:00:01.215) 0:50:30.486 ********
2026-03-26 05:53:08.106346 | orchestrator | ok: [testbed-node-0]
=> (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:52:59.842534', 'end': '2026-03-26 05:52:59.895695', 'delta': '0:00:00.053161', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106361 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:53:00.541458', 'end': '2026-03-26 05:53:00.583420', 'delta': '0:00:00.041962', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106385 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:53:01.105903', 'end': '2026-03-26 05:53:01.153302', 'delta': '0:00:00.047399', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True,
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:53:08.106397 | orchestrator |
2026-03-26 05:53:08.106408 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-26 05:53:08.106426 | orchestrator | Thursday 26 March 2026 05:53:08 +0000 (0:00:01.263) 0:50:31.749 ********
2026-03-26 05:53:27.074848 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:27.074947 | orchestrator |
2026-03-26 05:53:27.074959 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-26 05:53:27.074969 | orchestrator | Thursday 26 March 2026 05:53:09 +0000 (0:00:01.304) 0:50:33.054 ********
2026-03-26 05:53:27.074977 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.074985 | orchestrator |
2026-03-26 05:53:27.074992 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-26 05:53:27.075000 | orchestrator | Thursday 26 March 2026 05:53:10 +0000 (0:00:01.316) 0:50:34.371 ********
2026-03-26 05:53:27.075007 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:27.075014 | orchestrator |
2026-03-26 05:53:27.075021 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 05:53:27.075029 | orchestrator | Thursday 26 March 2026 05:53:11 +0000 (0:00:01.155) 0:50:35.526 ********
2026-03-26 05:53:27.075036 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:27.075043 | orchestrator |
2026-03-26 05:53:27.075050 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:53:27.075057 | orchestrator | Thursday 26 March 2026 05:53:13 +0000 (0:00:02.035) 0:50:37.561 ********
2026-03-26 05:53:27.075065 | orchestrator | ok: [testbed-node-0]
2026-03-26 05:53:27.075073 | orchestrator |
2026-03-26 05:53:27.075080 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 05:53:27.075087 | orchestrator | Thursday 26 March 2026 05:53:15 +0000 (0:00:01.161) 0:50:38.723 ********
2026-03-26 05:53:27.075095 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075102 | orchestrator |
2026-03-26 05:53:27.075109 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 05:53:27.075116 | orchestrator | Thursday 26 March 2026 05:53:16 +0000 (0:00:01.250) 0:50:39.974 ********
2026-03-26 05:53:27.075123 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075130 | orchestrator |
2026-03-26 05:53:27.075137 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:53:27.075145 | orchestrator | Thursday 26 March 2026 05:53:17 +0000 (0:00:01.273) 0:50:41.247 ********
2026-03-26 05:53:27.075152 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075159 | orchestrator |
2026-03-26 05:53:27.075166 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 05:53:27.075173 | orchestrator | Thursday 26 March 2026 05:53:18 +0000 (0:00:01.155) 0:50:42.402 ********
2026-03-26 05:53:27.075180 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075204 | orchestrator |
2026-03-26 05:53:27.075211 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 05:53:27.075218 | orchestrator | Thursday 26 March 2026 05:53:19 +0000 (0:00:01.147) 0:50:43.550 ********
2026-03-26 05:53:27.075226 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075233 | orchestrator |
2026-03-26 05:53:27.075240 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 05:53:27.075247 | orchestrator | Thursday 26 March 2026 05:53:21 +0000 (0:00:01.180) 0:50:44.731 ********
2026-03-26 05:53:27.075254 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075261 | orchestrator |
2026-03-26 05:53:27.075268 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 05:53:27.075275 | orchestrator | Thursday 26 March 2026 05:53:22 +0000 (0:00:01.190) 0:50:45.921 ********
2026-03-26 05:53:27.075283 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075290 | orchestrator |
2026-03-26 05:53:27.075297 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 05:53:27.075304 | orchestrator | Thursday 26 March 2026 05:53:23 +0000 (0:00:01.132) 0:50:47.054 ********
2026-03-26 05:53:27.075311 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075318 | orchestrator |
2026-03-26 05:53:27.075325 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 05:53:27.075332 | orchestrator | Thursday 26 March 2026 05:53:24 +0000 (0:00:01.180) 0:50:48.235 ********
2026-03-26 05:53:27.075339 | orchestrator | skipping: [testbed-node-0]
2026-03-26 05:53:27.075346 | orchestrator |
2026-03-26 05:53:27.075354 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-26 05:53:27.075361 | orchestrator | Thursday 26 March 2026 05:53:25 +0000 (0:00:01.166) 0:50:49.402 ********
2026-03-26 05:53:27.075370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00
Bytes', 'host': '', 'holders': []}})  2026-03-26 05:53:27.075380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:53:27.075400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:53:27.075425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:53:27.075436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-03-26 05:53:27.075452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:53:27.075461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:53:27.075479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 
'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 05:53:27.075496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:53:28.401462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:53:28.401588 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:53:28.401606 | orchestrator | 2026-03-26 05:53:28.401619 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:53:28.401631 | orchestrator | Thursday 26 March 2026 05:53:27 +0000 (0:00:01.310) 0:50:50.712 ******** 2026-03-26 05:53:28.401646 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401660 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401672 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401685 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-12-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401780 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401830 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401858 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401874 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'c374eb4c', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1', 'scsi-SQEMU_QEMU_HARDDISK_c374eb4c-3572-4f0b-927c-38d35765f44a-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401897 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:53:28.401929 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:23.900373 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:54:23.900516 | orchestrator | 2026-03-26 05:54:23.900535 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:54:23.900548 | 
orchestrator | Thursday 26 March 2026 05:53:28 +0000 (0:00:01.342) 0:50:52.055 ******** 2026-03-26 05:54:23.900559 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:54:23.900572 | orchestrator | 2026-03-26 05:54:23.900583 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:54:23.900594 | orchestrator | Thursday 26 March 2026 05:53:29 +0000 (0:00:01.543) 0:50:53.599 ******** 2026-03-26 05:54:23.900605 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:54:23.900615 | orchestrator | 2026-03-26 05:54:23.900626 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:54:23.900637 | orchestrator | Thursday 26 March 2026 05:53:31 +0000 (0:00:01.182) 0:50:54.781 ******** 2026-03-26 05:54:23.900648 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:54:23.900658 | orchestrator | 2026-03-26 05:54:23.900669 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:54:23.900680 | orchestrator | Thursday 26 March 2026 05:53:32 +0000 (0:00:01.520) 0:50:56.302 ******** 2026-03-26 05:54:23.900755 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:54:23.900766 | orchestrator | 2026-03-26 05:54:23.900777 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:54:23.900788 | orchestrator | Thursday 26 March 2026 05:53:33 +0000 (0:00:01.181) 0:50:57.483 ******** 2026-03-26 05:54:23.900800 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:54:23.900811 | orchestrator | 2026-03-26 05:54:23.900821 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:54:23.900832 | orchestrator | Thursday 26 March 2026 05:53:35 +0000 (0:00:01.248) 0:50:58.732 ******** 2026-03-26 05:54:23.900843 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:54:23.900854 | orchestrator | 2026-03-26 05:54:23.900865 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:54:23.900875 | orchestrator | Thursday 26 March 2026 05:53:36 +0000 (0:00:01.141) 0:50:59.873 ******** 2026-03-26 05:54:23.900886 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:54:23.900897 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-26 05:54:23.900908 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-26 05:54:23.900919 | orchestrator | 2026-03-26 05:54:23.900929 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:54:23.900940 | orchestrator | Thursday 26 March 2026 05:53:37 +0000 (0:00:01.714) 0:51:01.588 ******** 2026-03-26 05:54:23.900951 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-26 05:54:23.900961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-26 05:54:23.900972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-26 05:54:23.900983 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:54:23.900994 | orchestrator | 2026-03-26 05:54:23.901004 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:54:23.901015 | orchestrator | Thursday 26 March 2026 05:53:39 +0000 (0:00:01.198) 0:51:02.786 ******** 2026-03-26 05:54:23.901026 | orchestrator | skipping: [testbed-node-0] 2026-03-26 05:54:23.901036 | orchestrator | 2026-03-26 05:54:23.901047 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:54:23.901058 | orchestrator | Thursday 26 March 2026 05:53:40 +0000 (0:00:01.157) 0:51:03.944 ******** 2026-03-26 05:54:23.901093 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:54:23.901105 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 
05:54:23.901117 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:54:23.901128 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:54:23.901138 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:54:23.901149 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:54:23.901159 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:54:23.901170 | orchestrator | 2026-03-26 05:54:23.901195 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:54:23.901206 | orchestrator | Thursday 26 March 2026 05:53:42 +0000 (0:00:02.213) 0:51:06.157 ******** 2026-03-26 05:54:23.901217 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-26 05:54:23.901227 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:54:23.901238 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:54:23.901249 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 05:54:23.901259 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:54:23.901270 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:54:23.901280 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:54:23.901291 | orchestrator | 2026-03-26 05:54:23.901302 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-03-26 05:54:23.901312 | orchestrator | Thursday 26 March 2026 05:53:45 +0000 (0:00:02.776) 0:51:08.933 
******** 2026-03-26 05:54:23.901323 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:54:23.901333 | orchestrator | 2026-03-26 05:54:23.901344 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-03-26 05:54:23.901354 | orchestrator | Thursday 26 March 2026 05:53:48 +0000 (0:00:03.190) 0:51:12.124 ******** 2026-03-26 05:54:23.901365 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:54:23.901376 | orchestrator | 2026-03-26 05:54:23.901406 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-03-26 05:54:23.901418 | orchestrator | Thursday 26 March 2026 05:53:51 +0000 (0:00:02.985) 0:51:15.110 ******** 2026-03-26 05:54:23.901429 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:54:23.901440 | orchestrator | 2026-03-26 05:54:23.901450 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-03-26 05:54:23.901461 | orchestrator | Thursday 26 March 2026 05:53:53 +0000 (0:00:02.115) 0:51:17.225 ******** 2026-03-26 05:54:23.901475 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_4752', 'value': {'gid': 4752, 'name': 'testbed-node-3', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.13:6817/3459244977', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.13:6816', 'nonce': 3459244977}, {'type': 'v1', 'addr': '192.168.16.13:6817', 'nonce': 3459244977}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-03-26 
05:54:23.901490 | orchestrator | 2026-03-26 05:54:23.901501 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-03-26 05:54:23.901521 | orchestrator | Thursday 26 March 2026 05:53:54 +0000 (0:00:01.240) 0:51:18.466 ******** 2026-03-26 05:54:23.901532 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-3) 2026-03-26 05:54:23.901542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-03-26 05:54:23.901553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-03-26 05:54:23.901563 | orchestrator | 2026-03-26 05:54:23.901574 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-03-26 05:54:23.901584 | orchestrator | Thursday 26 March 2026 05:53:56 +0000 (0:00:01.553) 0:51:20.020 ******** 2026-03-26 05:54:23.901595 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-03-26 05:54:23.901605 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-03-26 05:54:23.901616 | orchestrator | 2026-03-26 05:54:23.901626 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-03-26 05:54:23.901636 | orchestrator | Thursday 26 March 2026 05:53:57 +0000 (0:00:01.559) 0:51:21.579 ******** 2026-03-26 05:54:23.901647 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:54:23.901658 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:54:23.901676 | orchestrator | 2026-03-26 05:54:23.901718 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-03-26 05:54:23.901736 | orchestrator | Thursday 26 March 2026 05:54:09 +0000 (0:00:11.167) 0:51:32.747 ******** 2026-03-26 05:54:23.901754 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 
2026-03-26 05:54:23.901773 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:54:23.901787 | orchestrator | 2026-03-26 05:54:23.901798 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-03-26 05:54:23.901809 | orchestrator | Thursday 26 March 2026 05:54:13 +0000 (0:00:04.755) 0:51:37.502 ******** 2026-03-26 05:54:23.901819 | orchestrator | ok: [testbed-node-0] 2026-03-26 05:54:23.901830 | orchestrator | 2026-03-26 05:54:23.901841 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-03-26 05:54:23.901851 | orchestrator | Thursday 26 March 2026 05:54:16 +0000 (0:00:02.186) 0:51:39.688 ******** 2026-03-26 05:54:23.901869 | orchestrator | changed: [testbed-node-0] 2026-03-26 05:54:23.901879 | orchestrator | 2026-03-26 05:54:23.901890 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-03-26 05:54:23.901900 | orchestrator | 2026-03-26 05:54:23.901911 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 05:54:23.901922 | orchestrator | Thursday 26 March 2026 05:54:17 +0000 (0:00:01.601) 0:51:41.290 ******** 2026-03-26 05:54:23.901932 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-26 05:54:23.901943 | orchestrator | 2026-03-26 05:54:23.901953 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 05:54:23.901964 | orchestrator | Thursday 26 March 2026 05:54:18 +0000 (0:00:01.144) 0:51:42.434 ******** 2026-03-26 05:54:23.901974 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:23.901985 | orchestrator | 2026-03-26 05:54:23.901995 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 05:54:23.902006 | orchestrator | Thursday 26 March 2026 
05:54:20 +0000 (0:00:01.438) 0:51:43.873 ******** 2026-03-26 05:54:23.902078 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:23.902090 | orchestrator | 2026-03-26 05:54:23.902101 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 05:54:23.902112 | orchestrator | Thursday 26 March 2026 05:54:21 +0000 (0:00:01.110) 0:51:44.983 ******** 2026-03-26 05:54:23.902122 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:23.902133 | orchestrator | 2026-03-26 05:54:23.902144 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 05:54:23.902155 | orchestrator | Thursday 26 March 2026 05:54:22 +0000 (0:00:01.428) 0:51:46.412 ******** 2026-03-26 05:54:23.902174 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:23.902185 | orchestrator | 2026-03-26 05:54:23.902205 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 05:54:48.718930 | orchestrator | Thursday 26 March 2026 05:54:23 +0000 (0:00:01.135) 0:51:47.548 ******** 2026-03-26 05:54:48.719037 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:48.719051 | orchestrator | 2026-03-26 05:54:48.719062 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 05:54:48.719072 | orchestrator | Thursday 26 March 2026 05:54:25 +0000 (0:00:01.113) 0:51:48.661 ******** 2026-03-26 05:54:48.719082 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:48.719092 | orchestrator | 2026-03-26 05:54:48.719101 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 05:54:48.719112 | orchestrator | Thursday 26 March 2026 05:54:26 +0000 (0:00:01.181) 0:51:49.842 ******** 2026-03-26 05:54:48.719121 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:48.719131 | orchestrator | 2026-03-26 05:54:48.719141 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-03-26 05:54:48.719150 | orchestrator | Thursday 26 March 2026 05:54:27 +0000 (0:00:01.134) 0:51:50.977 ******** 2026-03-26 05:54:48.719160 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:48.719169 | orchestrator | 2026-03-26 05:54:48.719179 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 05:54:48.719188 | orchestrator | Thursday 26 March 2026 05:54:28 +0000 (0:00:01.129) 0:51:52.107 ******** 2026-03-26 05:54:48.719198 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:54:48.719207 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:54:48.719217 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:54:48.719226 | orchestrator | 2026-03-26 05:54:48.719236 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 05:54:48.719245 | orchestrator | Thursday 26 March 2026 05:54:30 +0000 (0:00:01.751) 0:51:53.858 ******** 2026-03-26 05:54:48.719255 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:48.719264 | orchestrator | 2026-03-26 05:54:48.719274 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 05:54:48.719284 | orchestrator | Thursday 26 March 2026 05:54:31 +0000 (0:00:01.273) 0:51:55.131 ******** 2026-03-26 05:54:48.719293 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:54:48.719303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:54:48.719312 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:54:48.719321 | orchestrator | 2026-03-26 05:54:48.719331 | orchestrator | TASK 
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 05:54:48.719341 | orchestrator | Thursday 26 March 2026 05:54:34 +0000 (0:00:02.843) 0:51:57.975 ******** 2026-03-26 05:54:48.719350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 05:54:48.719360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 05:54:48.719369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 05:54:48.719378 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:48.719388 | orchestrator | 2026-03-26 05:54:48.719397 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 05:54:48.719407 | orchestrator | Thursday 26 March 2026 05:54:35 +0000 (0:00:01.507) 0:51:59.482 ******** 2026-03-26 05:54:48.719417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 05:54:48.719429 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 05:54:48.719478 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 05:54:48.719490 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:48.719502 | orchestrator | 2026-03-26 05:54:48.719513 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 05:54:48.719525 | orchestrator | Thursday 26 March 2026 05:54:37 +0000 
(0:00:01.981) 0:52:01.464 ******** 2026-03-26 05:54:48.719538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:48.719567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:48.719580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:48.719591 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:48.719602 | orchestrator | 2026-03-26 05:54:48.719614 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 05:54:48.719624 | orchestrator | Thursday 26 March 2026 05:54:38 +0000 (0:00:01.153) 0:52:02.618 ******** 2026-03-26 05:54:48.719638 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 
05:54:32.014467', 'end': '2026-03-26 05:54:32.067837', 'delta': '0:00:00.053370', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 05:54:48.719652 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:54:32.555012', 'end': '2026-03-26 05:54:32.590382', 'delta': '0:00:00.035370', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 05:54:48.719669 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:54:33.112871', 'end': '2026-03-26 05:54:33.160580', 'delta': '0:00:00.047709', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 05:54:48.719718 | orchestrator | 2026-03-26 05:54:48.719731 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 05:54:48.719742 | orchestrator | Thursday 26 March 2026 05:54:40 +0000 (0:00:01.208) 0:52:03.826 ******** 2026-03-26 05:54:48.719753 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:48.719764 | orchestrator | 2026-03-26 05:54:48.719775 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 05:54:48.719786 | orchestrator | Thursday 26 March 2026 05:54:41 +0000 (0:00:01.304) 0:52:05.131 ******** 2026-03-26 05:54:48.719797 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:48.719807 | orchestrator | 2026-03-26 05:54:48.719818 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 05:54:48.719829 | orchestrator | Thursday 26 March 2026 05:54:43 +0000 (0:00:01.662) 0:52:06.793 ******** 2026-03-26 05:54:48.719840 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:48.719852 | orchestrator | 2026-03-26 05:54:48.719862 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 05:54:48.719872 | orchestrator | Thursday 26 March 2026 05:54:44 +0000 (0:00:01.149) 0:52:07.942 ******** 2026-03-26 05:54:48.719881 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-26 05:54:48.719891 | orchestrator | 2026-03-26 05:54:48.719900 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:54:48.719910 | orchestrator | Thursday 26 March 2026 05:54:46 +0000 (0:00:02.011) 0:52:09.954 ******** 2026-03-26 05:54:48.719919 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:48.719928 | orchestrator | 2026-03-26 
05:54:48.719938 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 05:54:48.719947 | orchestrator | Thursday 26 March 2026 05:54:47 +0000 (0:00:01.248) 0:52:11.204 ******** 2026-03-26 05:54:48.719963 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:58.380138 | orchestrator | 2026-03-26 05:54:58.380257 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 05:54:58.380275 | orchestrator | Thursday 26 March 2026 05:54:48 +0000 (0:00:01.161) 0:52:12.365 ******** 2026-03-26 05:54:58.380288 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:58.380301 | orchestrator | 2026-03-26 05:54:58.380313 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 05:54:58.380325 | orchestrator | Thursday 26 March 2026 05:54:49 +0000 (0:00:01.263) 0:52:13.629 ******** 2026-03-26 05:54:58.380335 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:58.380346 | orchestrator | 2026-03-26 05:54:58.380357 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 05:54:58.380369 | orchestrator | Thursday 26 March 2026 05:54:51 +0000 (0:00:01.166) 0:52:14.795 ******** 2026-03-26 05:54:58.380381 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:58.380392 | orchestrator | 2026-03-26 05:54:58.380402 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 05:54:58.380413 | orchestrator | Thursday 26 March 2026 05:54:52 +0000 (0:00:01.125) 0:52:15.921 ******** 2026-03-26 05:54:58.380425 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:58.380437 | orchestrator | 2026-03-26 05:54:58.380448 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 05:54:58.380459 | orchestrator | Thursday 26 March 2026 05:54:53 +0000 (0:00:01.166) 
0:52:17.087 ******** 2026-03-26 05:54:58.380492 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:58.380504 | orchestrator | 2026-03-26 05:54:58.380516 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 05:54:58.380527 | orchestrator | Thursday 26 March 2026 05:54:54 +0000 (0:00:01.114) 0:52:18.201 ******** 2026-03-26 05:54:58.380538 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:58.380549 | orchestrator | 2026-03-26 05:54:58.380560 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 05:54:58.380571 | orchestrator | Thursday 26 March 2026 05:54:55 +0000 (0:00:01.226) 0:52:19.428 ******** 2026-03-26 05:54:58.380582 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:58.380593 | orchestrator | 2026-03-26 05:54:58.380605 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 05:54:58.380617 | orchestrator | Thursday 26 March 2026 05:54:56 +0000 (0:00:01.129) 0:52:20.557 ******** 2026-03-26 05:54:58.380627 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:54:58.380638 | orchestrator | 2026-03-26 05:54:58.380649 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 05:54:58.380659 | orchestrator | Thursday 26 March 2026 05:54:58 +0000 (0:00:01.171) 0:52:21.729 ******** 2026-03-26 05:54:58.380708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:58.380743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}})  2026-03-26 05:54:58.380760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 05:54:58.380793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}})  2026-03-26 05:54:58.380808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:58.380830 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:58.380844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:54:58.380857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:58.380870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:54:58.380888 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:58.380902 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'uuids': ['aef43475-035b-4229-a7d7-1e3432ab4dcb'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS']}})  2026-03-26 05:54:58.380922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082']}})  2026-03-26 05:54:59.832814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:59.832947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce600cf2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:54:59.832968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:59.833706 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:54:59.833728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY', 'dm-uuid-CRYPT-LUKS2-c579629dafc941d5a76c63e3abbafb40-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:54:59.833761 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:54:59.833773 | orchestrator | 2026-03-26 05:54:59.833800 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:54:59.833812 | orchestrator | Thursday 26 March 2026 05:54:59 +0000 (0:00:01.516) 0:52:23.246 ******** 2026-03-26 05:54:59.833823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:59.833849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:59.833874 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:59.833983 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:59.834063 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:54:59.834090 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.024997 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'uuids': ['aef43475-035b-4229-a7d7-1e3432ab4dcb'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025224 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025262 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce600cf2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:01.025302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:35.885719 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY', 'dm-uuid-CRYPT-LUKS2-c579629dafc941d5a76c63e3abbafb40-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:55:35.885855 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.885885 | orchestrator | 2026-03-26 05:55:35.885906 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 05:55:35.885926 | orchestrator | Thursday 26 March 2026 05:55:01 +0000 (0:00:01.431) 0:52:24.678 ******** 2026-03-26 05:55:35.885945 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:55:35.885965 | orchestrator | 2026-03-26 05:55:35.885983 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 05:55:35.886001 | orchestrator | Thursday 26 March 2026 05:55:02 +0000 (0:00:01.579) 0:52:26.257 ******** 2026-03-26 05:55:35.886100 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:55:35.886125 | orchestrator | 2026-03-26 05:55:35.886164 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:55:35.886186 | orchestrator | Thursday 26 March 2026 05:55:03 +0000 (0:00:01.132) 0:52:27.390 ******** 2026-03-26 05:55:35.886205 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:55:35.886225 | orchestrator | 2026-03-26 05:55:35.886278 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:55:35.886298 | orchestrator | Thursday 26 March 2026 05:55:05 +0000 (0:00:01.478) 0:52:28.869 ******** 2026-03-26 05:55:35.886318 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.886336 | orchestrator | 2026-03-26 05:55:35.886354 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 05:55:35.886374 | orchestrator | Thursday 26 March 2026 05:55:06 +0000 (0:00:01.211) 0:52:30.081 ******** 2026-03-26 05:55:35.886393 | orchestrator | skipping: [testbed-node-3] 2026-03-26 
05:55:35.886412 | orchestrator | 2026-03-26 05:55:35.886431 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 05:55:35.886450 | orchestrator | Thursday 26 March 2026 05:55:07 +0000 (0:00:01.192) 0:52:31.273 ******** 2026-03-26 05:55:35.886469 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.886488 | orchestrator | 2026-03-26 05:55:35.886507 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 05:55:35.886526 | orchestrator | Thursday 26 March 2026 05:55:08 +0000 (0:00:01.107) 0:52:32.381 ******** 2026-03-26 05:55:35.886544 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-26 05:55:35.886564 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-26 05:55:35.886582 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-26 05:55:35.886602 | orchestrator | 2026-03-26 05:55:35.886613 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 05:55:35.886624 | orchestrator | Thursday 26 March 2026 05:55:10 +0000 (0:00:01.840) 0:52:34.221 ******** 2026-03-26 05:55:35.886635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 05:55:35.886646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 05:55:35.886656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 05:55:35.886825 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.886851 | orchestrator | 2026-03-26 05:55:35.886864 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 05:55:35.886875 | orchestrator | Thursday 26 March 2026 05:55:11 +0000 (0:00:01.134) 0:52:35.356 ******** 2026-03-26 05:55:35.886886 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-03-26 05:55:35.886897 | 
orchestrator | 2026-03-26 05:55:35.886908 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 05:55:35.886920 | orchestrator | Thursday 26 March 2026 05:55:12 +0000 (0:00:01.111) 0:52:36.468 ******** 2026-03-26 05:55:35.886930 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.886941 | orchestrator | 2026-03-26 05:55:35.886952 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 05:55:35.886962 | orchestrator | Thursday 26 March 2026 05:55:13 +0000 (0:00:01.174) 0:52:37.642 ******** 2026-03-26 05:55:35.886973 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.887000 | orchestrator | 2026-03-26 05:55:35.887011 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 05:55:35.887022 | orchestrator | Thursday 26 March 2026 05:55:15 +0000 (0:00:01.347) 0:52:38.990 ******** 2026-03-26 05:55:35.887043 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.887054 | orchestrator | 2026-03-26 05:55:35.887064 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 05:55:35.887075 | orchestrator | Thursday 26 March 2026 05:55:16 +0000 (0:00:01.161) 0:52:40.152 ******** 2026-03-26 05:55:35.887085 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:55:35.887096 | orchestrator | 2026-03-26 05:55:35.887107 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 05:55:35.887117 | orchestrator | Thursday 26 March 2026 05:55:17 +0000 (0:00:01.327) 0:52:41.479 ******** 2026-03-26 05:55:35.887128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:55:35.887162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:55:35.887187 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-03-26 05:55:35.887198 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.887209 | orchestrator | 2026-03-26 05:55:35.887220 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 05:55:35.887231 | orchestrator | Thursday 26 March 2026 05:55:19 +0000 (0:00:01.450) 0:52:42.930 ******** 2026-03-26 05:55:35.887241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:55:35.887252 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:55:35.887262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 05:55:35.887274 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.887285 | orchestrator | 2026-03-26 05:55:35.887295 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 05:55:35.887306 | orchestrator | Thursday 26 March 2026 05:55:20 +0000 (0:00:01.404) 0:52:44.335 ******** 2026-03-26 05:55:35.887317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 05:55:35.887327 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 05:55:35.887338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 05:55:35.887349 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.887359 | orchestrator | 2026-03-26 05:55:35.887370 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 05:55:35.887381 | orchestrator | Thursday 26 March 2026 05:55:22 +0000 (0:00:01.468) 0:52:45.803 ******** 2026-03-26 05:55:35.887391 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:55:35.887402 | orchestrator | 2026-03-26 05:55:35.887412 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 05:55:35.887533 | orchestrator | Thursday 26 March 2026 05:55:23 +0000 
(0:00:01.194) 0:52:46.998 ******** 2026-03-26 05:55:35.887545 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-26 05:55:35.887556 | orchestrator | 2026-03-26 05:55:35.887566 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 05:55:35.887577 | orchestrator | Thursday 26 March 2026 05:55:24 +0000 (0:00:01.338) 0:52:48.336 ******** 2026-03-26 05:55:35.887588 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:55:35.887599 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:55:35.887609 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:55:35.887620 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-26 05:55:35.887630 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:55:35.887641 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:55:35.887651 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:55:35.887681 | orchestrator | 2026-03-26 05:55:35.887693 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 05:55:35.887704 | orchestrator | Thursday 26 March 2026 05:55:26 +0000 (0:00:02.312) 0:52:50.649 ******** 2026-03-26 05:55:35.887714 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 05:55:35.887725 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 05:55:35.887735 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 05:55:35.887746 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-26 05:55:35.887756 
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 05:55:35.887767 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 05:55:35.887777 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 05:55:35.887797 | orchestrator | 2026-03-26 05:55:35.887808 | orchestrator | TASK [Prevent restart from the packaging] ************************************** 2026-03-26 05:55:35.887819 | orchestrator | Thursday 26 March 2026 05:55:29 +0000 (0:00:02.663) 0:52:53.312 ******** 2026-03-26 05:55:35.887830 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.887840 | orchestrator | 2026-03-26 05:55:35.887851 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-26 05:55:35.887861 | orchestrator | Thursday 26 March 2026 05:55:30 +0000 (0:00:01.132) 0:52:54.445 ******** 2026-03-26 05:55:35.887872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-26 05:55:35.887883 | orchestrator | 2026-03-26 05:55:35.887894 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-26 05:55:35.887904 | orchestrator | Thursday 26 March 2026 05:55:31 +0000 (0:00:01.144) 0:52:55.589 ******** 2026-03-26 05:55:35.887915 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-26 05:55:35.887926 | orchestrator | 2026-03-26 05:55:35.887936 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-26 05:55:35.887947 | orchestrator | Thursday 26 March 2026 05:55:33 +0000 (0:00:01.308) 0:52:56.898 ******** 2026-03-26 05:55:35.887957 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:55:35.887968 | orchestrator | 2026-03-26 05:55:35.887979 | orchestrator | TASK 
[ceph-handler : Check for an osd container] ******************************* 2026-03-26 05:55:35.887990 | orchestrator | Thursday 26 March 2026 05:55:34 +0000 (0:00:01.127) 0:52:58.025 ******** 2026-03-26 05:55:35.888000 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:55:35.888011 | orchestrator | 2026-03-26 05:55:35.888022 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-26 05:55:35.888041 | orchestrator | Thursday 26 March 2026 05:55:35 +0000 (0:00:01.504) 0:52:59.530 ******** 2026-03-26 05:56:27.073252 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.073385 | orchestrator | 2026-03-26 05:56:27.073404 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-26 05:56:27.073417 | orchestrator | Thursday 26 March 2026 05:55:37 +0000 (0:00:01.562) 0:53:01.093 ******** 2026-03-26 05:56:27.073428 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.073439 | orchestrator | 2026-03-26 05:56:27.073450 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-26 05:56:27.073461 | orchestrator | Thursday 26 March 2026 05:55:38 +0000 (0:00:01.552) 0:53:02.646 ******** 2026-03-26 05:56:27.073472 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.073484 | orchestrator | 2026-03-26 05:56:27.073495 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-26 05:56:27.073505 | orchestrator | Thursday 26 March 2026 05:55:40 +0000 (0:00:01.131) 0:53:03.778 ******** 2026-03-26 05:56:27.073516 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.073527 | orchestrator | 2026-03-26 05:56:27.073537 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-26 05:56:27.073548 | orchestrator | Thursday 26 March 2026 05:55:41 +0000 (0:00:01.194) 0:53:04.972 ******** 2026-03-26 05:56:27.073558 | 
orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.073569 | orchestrator | 2026-03-26 05:56:27.073580 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-26 05:56:27.073590 | orchestrator | Thursday 26 March 2026 05:55:42 +0000 (0:00:01.121) 0:53:06.094 ******** 2026-03-26 05:56:27.073601 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.073612 | orchestrator | 2026-03-26 05:56:27.073622 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 05:56:27.073716 | orchestrator | Thursday 26 March 2026 05:55:44 +0000 (0:00:01.588) 0:53:07.682 ******** 2026-03-26 05:56:27.073733 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.073745 | orchestrator | 2026-03-26 05:56:27.073758 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 05:56:27.073793 | orchestrator | Thursday 26 March 2026 05:55:45 +0000 (0:00:01.590) 0:53:09.272 ******** 2026-03-26 05:56:27.073806 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.073819 | orchestrator | 2026-03-26 05:56:27.073831 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 05:56:27.073844 | orchestrator | Thursday 26 March 2026 05:55:46 +0000 (0:00:01.158) 0:53:10.431 ******** 2026-03-26 05:56:27.073856 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.073869 | orchestrator | 2026-03-26 05:56:27.073882 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 05:56:27.073894 | orchestrator | Thursday 26 March 2026 05:55:47 +0000 (0:00:01.226) 0:53:11.657 ******** 2026-03-26 05:56:27.073906 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.073919 | orchestrator | 2026-03-26 05:56:27.073931 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 
05:56:27.073944 | orchestrator | Thursday 26 March 2026 05:55:49 +0000 (0:00:01.186) 0:53:12.844 ******** 2026-03-26 05:56:27.073955 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.073969 | orchestrator | 2026-03-26 05:56:27.073981 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 05:56:27.073992 | orchestrator | Thursday 26 March 2026 05:55:50 +0000 (0:00:01.126) 0:53:13.971 ******** 2026-03-26 05:56:27.074003 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.074074 | orchestrator | 2026-03-26 05:56:27.074088 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 05:56:27.074099 | orchestrator | Thursday 26 March 2026 05:55:51 +0000 (0:00:01.182) 0:53:15.153 ******** 2026-03-26 05:56:27.074110 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074120 | orchestrator | 2026-03-26 05:56:27.074131 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 05:56:27.074142 | orchestrator | Thursday 26 March 2026 05:55:52 +0000 (0:00:01.164) 0:53:16.317 ******** 2026-03-26 05:56:27.074152 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074163 | orchestrator | 2026-03-26 05:56:27.074173 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 05:56:27.074184 | orchestrator | Thursday 26 March 2026 05:55:53 +0000 (0:00:01.201) 0:53:17.518 ******** 2026-03-26 05:56:27.074194 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074205 | orchestrator | 2026-03-26 05:56:27.074216 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 05:56:27.074227 | orchestrator | Thursday 26 March 2026 05:55:55 +0000 (0:00:01.159) 0:53:18.678 ******** 2026-03-26 05:56:27.074237 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.074248 | orchestrator | 2026-03-26 
05:56:27.074259 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 05:56:27.074270 | orchestrator | Thursday 26 March 2026 05:55:56 +0000 (0:00:01.157) 0:53:19.836 ******** 2026-03-26 05:56:27.074280 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.074291 | orchestrator | 2026-03-26 05:56:27.074302 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-26 05:56:27.074312 | orchestrator | Thursday 26 March 2026 05:55:57 +0000 (0:00:01.175) 0:53:21.011 ******** 2026-03-26 05:56:27.074323 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074334 | orchestrator | 2026-03-26 05:56:27.074344 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 05:56:27.074355 | orchestrator | Thursday 26 March 2026 05:55:58 +0000 (0:00:01.145) 0:53:22.157 ******** 2026-03-26 05:56:27.074366 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074377 | orchestrator | 2026-03-26 05:56:27.074387 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 05:56:27.074398 | orchestrator | Thursday 26 March 2026 05:55:59 +0000 (0:00:01.151) 0:53:23.309 ******** 2026-03-26 05:56:27.074409 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074420 | orchestrator | 2026-03-26 05:56:27.074430 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 05:56:27.074450 | orchestrator | Thursday 26 March 2026 05:56:00 +0000 (0:00:01.158) 0:53:24.468 ******** 2026-03-26 05:56:27.074461 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074472 | orchestrator | 2026-03-26 05:56:27.074483 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 05:56:27.074512 | orchestrator | Thursday 26 March 2026 05:56:01 +0000 (0:00:01.132) 
0:53:25.601 ******** 2026-03-26 05:56:27.074524 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074535 | orchestrator | 2026-03-26 05:56:27.074545 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 05:56:27.074556 | orchestrator | Thursday 26 March 2026 05:56:03 +0000 (0:00:01.237) 0:53:26.838 ******** 2026-03-26 05:56:27.074567 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074578 | orchestrator | 2026-03-26 05:56:27.074588 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-26 05:56:27.074599 | orchestrator | Thursday 26 March 2026 05:56:04 +0000 (0:00:01.145) 0:53:27.983 ******** 2026-03-26 05:56:27.074610 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074621 | orchestrator | 2026-03-26 05:56:27.074631 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-26 05:56:27.074643 | orchestrator | Thursday 26 March 2026 05:56:05 +0000 (0:00:01.191) 0:53:29.175 ******** 2026-03-26 05:56:27.074691 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074703 | orchestrator | 2026-03-26 05:56:27.074713 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 05:56:27.074724 | orchestrator | Thursday 26 March 2026 05:56:06 +0000 (0:00:01.141) 0:53:30.317 ******** 2026-03-26 05:56:27.074734 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074745 | orchestrator | 2026-03-26 05:56:27.074755 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 05:56:27.074766 | orchestrator | Thursday 26 March 2026 05:56:07 +0000 (0:00:01.202) 0:53:31.519 ******** 2026-03-26 05:56:27.074777 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074787 | orchestrator | 2026-03-26 05:56:27.074805 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-03-26 05:56:27.074816 | orchestrator | Thursday 26 March 2026 05:56:09 +0000 (0:00:01.180) 0:53:32.699 ******** 2026-03-26 05:56:27.074826 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074837 | orchestrator | 2026-03-26 05:56:27.074848 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-03-26 05:56:27.074858 | orchestrator | Thursday 26 March 2026 05:56:10 +0000 (0:00:01.135) 0:53:33.835 ******** 2026-03-26 05:56:27.074869 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.074880 | orchestrator | 2026-03-26 05:56:27.074890 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-26 05:56:27.074901 | orchestrator | Thursday 26 March 2026 05:56:11 +0000 (0:00:01.128) 0:53:34.964 ******** 2026-03-26 05:56:27.074911 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.074922 | orchestrator | 2026-03-26 05:56:27.074933 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 05:56:27.074957 | orchestrator | Thursday 26 March 2026 05:56:13 +0000 (0:00:01.889) 0:53:36.853 ******** 2026-03-26 05:56:27.074979 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.074990 | orchestrator | 2026-03-26 05:56:27.075000 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 05:56:27.075011 | orchestrator | Thursday 26 March 2026 05:56:15 +0000 (0:00:02.250) 0:53:39.104 ******** 2026-03-26 05:56:27.075021 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-03-26 05:56:27.075033 | orchestrator | 2026-03-26 05:56:27.075044 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-26 05:56:27.075054 | orchestrator | Thursday 26 March 2026 05:56:16 +0000 (0:00:01.120) 
0:53:40.224 ******** 2026-03-26 05:56:27.075065 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.075076 | orchestrator | 2026-03-26 05:56:27.075086 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-26 05:56:27.075105 | orchestrator | Thursday 26 March 2026 05:56:17 +0000 (0:00:01.154) 0:53:41.378 ******** 2026-03-26 05:56:27.075115 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.075126 | orchestrator | 2026-03-26 05:56:27.075137 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-26 05:56:27.075147 | orchestrator | Thursday 26 March 2026 05:56:19 +0000 (0:00:01.304) 0:53:42.682 ******** 2026-03-26 05:56:27.075158 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-26 05:56:27.075168 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-26 05:56:27.075179 | orchestrator | 2026-03-26 05:56:27.075190 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-26 05:56:27.075200 | orchestrator | Thursday 26 March 2026 05:56:20 +0000 (0:00:01.869) 0:53:44.553 ******** 2026-03-26 05:56:27.075211 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:56:27.075221 | orchestrator | 2026-03-26 05:56:27.075232 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-26 05:56:27.075243 | orchestrator | Thursday 26 March 2026 05:56:22 +0000 (0:00:01.497) 0:53:46.051 ******** 2026-03-26 05:56:27.075253 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.075264 | orchestrator | 2026-03-26 05:56:27.075274 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-26 05:56:27.075285 | orchestrator | Thursday 26 March 2026 05:56:23 +0000 (0:00:01.144) 0:53:47.196 ******** 2026-03-26 05:56:27.075295 | 
orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.075306 | orchestrator | 2026-03-26 05:56:27.075317 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 05:56:27.075328 | orchestrator | Thursday 26 March 2026 05:56:24 +0000 (0:00:01.155) 0:53:48.351 ******** 2026-03-26 05:56:27.075338 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:56:27.075349 | orchestrator | 2026-03-26 05:56:27.075359 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 05:56:27.075370 | orchestrator | Thursday 26 March 2026 05:56:25 +0000 (0:00:01.180) 0:53:49.532 ******** 2026-03-26 05:56:27.075381 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-03-26 05:56:27.075391 | orchestrator | 2026-03-26 05:56:27.075402 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-26 05:56:27.075420 | orchestrator | Thursday 26 March 2026 05:56:27 +0000 (0:00:01.190) 0:53:50.723 ******** 2026-03-26 05:57:15.058167 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:57:15.058281 | orchestrator | 2026-03-26 05:57:15.058295 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-26 05:57:15.058307 | orchestrator | Thursday 26 March 2026 05:56:28 +0000 (0:00:01.815) 0:53:52.538 ******** 2026-03-26 05:57:15.058318 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 05:57:15.058329 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 05:57:15.058339 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 05:57:15.058349 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058359 | orchestrator | 2026-03-26 05:57:15.058368 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-03-26 05:57:15.058378 | orchestrator | Thursday 26 March 2026 05:56:30 +0000 (0:00:01.166) 0:53:53.705 ******** 2026-03-26 05:57:15.058388 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058397 | orchestrator | 2026-03-26 05:57:15.058407 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-26 05:57:15.058416 | orchestrator | Thursday 26 March 2026 05:56:31 +0000 (0:00:01.166) 0:53:54.871 ******** 2026-03-26 05:57:15.058426 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058435 | orchestrator | 2026-03-26 05:57:15.058445 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-26 05:57:15.058479 | orchestrator | Thursday 26 March 2026 05:56:32 +0000 (0:00:01.187) 0:53:56.059 ******** 2026-03-26 05:57:15.058489 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058499 | orchestrator | 2026-03-26 05:57:15.058522 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-26 05:57:15.058532 | orchestrator | Thursday 26 March 2026 05:56:33 +0000 (0:00:01.199) 0:53:57.258 ******** 2026-03-26 05:57:15.058541 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058551 | orchestrator | 2026-03-26 05:57:15.058560 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-26 05:57:15.058569 | orchestrator | Thursday 26 March 2026 05:56:34 +0000 (0:00:01.391) 0:53:58.650 ******** 2026-03-26 05:57:15.058579 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058588 | orchestrator | 2026-03-26 05:57:15.058598 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 05:57:15.058607 | orchestrator | Thursday 26 March 2026 05:56:36 +0000 (0:00:01.196) 0:53:59.847 ******** 2026-03-26 05:57:15.058616 | orchestrator | 
ok: [testbed-node-3] 2026-03-26 05:57:15.058627 | orchestrator | 2026-03-26 05:57:15.058661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 05:57:15.058673 | orchestrator | Thursday 26 March 2026 05:56:38 +0000 (0:00:02.550) 0:54:02.398 ******** 2026-03-26 05:57:15.058684 | orchestrator | ok: [testbed-node-3] 2026-03-26 05:57:15.058696 | orchestrator | 2026-03-26 05:57:15.058707 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 05:57:15.058718 | orchestrator | Thursday 26 March 2026 05:56:39 +0000 (0:00:01.135) 0:54:03.533 ******** 2026-03-26 05:57:15.058729 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-03-26 05:57:15.058740 | orchestrator | 2026-03-26 05:57:15.058751 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-26 05:57:15.058762 | orchestrator | Thursday 26 March 2026 05:56:41 +0000 (0:00:01.155) 0:54:04.689 ******** 2026-03-26 05:57:15.058774 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058785 | orchestrator | 2026-03-26 05:57:15.058796 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-26 05:57:15.058807 | orchestrator | Thursday 26 March 2026 05:56:42 +0000 (0:00:01.175) 0:54:05.865 ******** 2026-03-26 05:57:15.058816 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058830 | orchestrator | 2026-03-26 05:57:15.058846 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-26 05:57:15.058862 | orchestrator | Thursday 26 March 2026 05:56:43 +0000 (0:00:01.150) 0:54:07.016 ******** 2026-03-26 05:57:15.058880 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058890 | orchestrator | 2026-03-26 05:57:15.058900 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-03-26 05:57:15.058909 | orchestrator | Thursday 26 March 2026 05:56:44 +0000 (0:00:01.146) 0:54:08.162 ******** 2026-03-26 05:57:15.058919 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058928 | orchestrator | 2026-03-26 05:57:15.058938 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-26 05:57:15.058947 | orchestrator | Thursday 26 March 2026 05:56:45 +0000 (0:00:01.101) 0:54:09.264 ******** 2026-03-26 05:57:15.058957 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.058966 | orchestrator | 2026-03-26 05:57:15.058975 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-26 05:57:15.058985 | orchestrator | Thursday 26 March 2026 05:56:46 +0000 (0:00:01.177) 0:54:10.442 ******** 2026-03-26 05:57:15.058994 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.059004 | orchestrator | 2026-03-26 05:57:15.059013 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-26 05:57:15.059022 | orchestrator | Thursday 26 March 2026 05:56:47 +0000 (0:00:01.186) 0:54:11.629 ******** 2026-03-26 05:57:15.059032 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.059041 | orchestrator | 2026-03-26 05:57:15.059059 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-26 05:57:15.059069 | orchestrator | Thursday 26 March 2026 05:56:49 +0000 (0:00:01.124) 0:54:12.754 ******** 2026-03-26 05:57:15.059078 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.059088 | orchestrator | 2026-03-26 05:57:15.059097 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-26 05:57:15.059107 | orchestrator | Thursday 26 March 2026 05:56:50 +0000 (0:00:01.228) 0:54:13.982 ******** 2026-03-26 05:57:15.059116 | orchestrator | ok: [testbed-node-3] 
2026-03-26 05:57:15.059126 | orchestrator | 2026-03-26 05:57:15.059135 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 05:57:15.059161 | orchestrator | Thursday 26 March 2026 05:56:51 +0000 (0:00:01.151) 0:54:15.133 ******** 2026-03-26 05:57:15.059171 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-03-26 05:57:15.059181 | orchestrator | 2026-03-26 05:57:15.059191 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-26 05:57:15.059200 | orchestrator | Thursday 26 March 2026 05:56:52 +0000 (0:00:01.127) 0:54:16.261 ******** 2026-03-26 05:57:15.059210 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-03-26 05:57:15.059220 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-26 05:57:15.059229 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-26 05:57:15.059239 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-26 05:57:15.059248 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-26 05:57:15.059258 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-26 05:57:15.059267 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-26 05:57:15.059276 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-26 05:57:15.059286 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 05:57:15.059295 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 05:57:15.059305 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 05:57:15.059314 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 05:57:15.059324 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 05:57:15.059338 | 
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-26 05:57:15.059348 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-03-26 05:57:15.059357 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-03-26 05:57:15.059367 | orchestrator | 2026-03-26 05:57:15.059376 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 05:57:15.059385 | orchestrator | Thursday 26 March 2026 05:56:59 +0000 (0:00:06.553) 0:54:22.815 ******** 2026-03-26 05:57:15.059395 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-03-26 05:57:15.059404 | orchestrator | 2026-03-26 05:57:15.059414 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-26 05:57:15.059423 | orchestrator | Thursday 26 March 2026 05:57:00 +0000 (0:00:01.161) 0:54:23.976 ******** 2026-03-26 05:57:15.059433 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 05:57:15.059443 | orchestrator | 2026-03-26 05:57:15.059453 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-26 05:57:15.059462 | orchestrator | Thursday 26 March 2026 05:57:01 +0000 (0:00:01.498) 0:54:25.474 ******** 2026-03-26 05:57:15.059472 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 05:57:15.059481 | orchestrator | 2026-03-26 05:57:15.059491 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 05:57:15.059500 | orchestrator | Thursday 26 March 2026 05:57:04 +0000 (0:00:03.000) 0:54:28.475 ******** 2026-03-26 05:57:15.059516 | orchestrator | skipping: [testbed-node-3] 2026-03-26 05:57:15.059525 | orchestrator | 
2026-03-26 05:57:15.059535 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 05:57:15.059545 | orchestrator | Thursday 26 March 2026 05:57:05 +0000 (0:00:01.122) 0:54:29.598 ********
2026-03-26 05:57:15.059554 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059564 | orchestrator |
2026-03-26 05:57:15.059573 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 05:57:15.059583 | orchestrator | Thursday 26 March 2026 05:57:07 +0000 (0:00:01.118) 0:54:30.717 ********
2026-03-26 05:57:15.059592 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059602 | orchestrator |
2026-03-26 05:57:15.059611 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 05:57:15.059620 | orchestrator | Thursday 26 March 2026 05:57:08 +0000 (0:00:01.153) 0:54:31.871 ********
2026-03-26 05:57:15.059630 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059670 | orchestrator |
2026-03-26 05:57:15.059681 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 05:57:15.059690 | orchestrator | Thursday 26 March 2026 05:57:09 +0000 (0:00:01.214) 0:54:33.085 ********
2026-03-26 05:57:15.059699 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059709 | orchestrator |
2026-03-26 05:57:15.059719 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 05:57:15.059728 | orchestrator | Thursday 26 March 2026 05:57:10 +0000 (0:00:01.139) 0:54:34.224 ********
2026-03-26 05:57:15.059738 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059747 | orchestrator |
2026-03-26 05:57:15.059757 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 05:57:15.059766 | orchestrator | Thursday 26 March 2026 05:57:11 +0000 (0:00:01.126) 0:54:35.351 ********
2026-03-26 05:57:15.059776 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059786 | orchestrator |
2026-03-26 05:57:15.059795 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 05:57:15.059805 | orchestrator | Thursday 26 March 2026 05:57:12 +0000 (0:00:01.113) 0:54:36.465 ********
2026-03-26 05:57:15.059814 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059824 | orchestrator |
2026-03-26 05:57:15.059833 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 05:57:15.059843 | orchestrator | Thursday 26 March 2026 05:57:13 +0000 (0:00:01.106) 0:54:37.572 ********
2026-03-26 05:57:15.059853 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:57:15.059862 | orchestrator |
2026-03-26 05:57:15.059878 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 05:58:11.109068 | orchestrator | Thursday 26 March 2026 05:57:15 +0000 (0:00:01.128) 0:54:38.701 ********
2026-03-26 05:58:11.109186 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109203 | orchestrator |
2026-03-26 05:58:11.109216 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 05:58:11.109227 | orchestrator | Thursday 26 March 2026 05:57:16 +0000 (0:00:01.172) 0:54:39.873 ********
2026-03-26 05:58:11.109239 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109250 | orchestrator |
2026-03-26 05:58:11.109261 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 05:58:11.109272 | orchestrator | Thursday 26 March 2026 05:57:17 +0000 (0:00:01.211) 0:54:41.085 ********
2026-03-26 05:58:11.109283 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-26 05:58:11.109293 | orchestrator |
2026-03-26 05:58:11.109304 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 05:58:11.109315 | orchestrator | Thursday 26 March 2026 05:57:21 +0000 (0:00:04.492) 0:54:45.578 ********
2026-03-26 05:58:11.109326 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 05:58:11.109361 | orchestrator |
2026-03-26 05:58:11.109372 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 05:58:11.109383 | orchestrator | Thursday 26 March 2026 05:57:23 +0000 (0:00:01.201) 0:54:46.779 ********
2026-03-26 05:58:11.109412 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-26 05:58:11.109435 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-26 05:58:11.109455 | orchestrator |
2026-03-26 05:58:11.109472 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 05:58:11.109489 | orchestrator | Thursday 26 March 2026 05:57:27 +0000 (0:00:04.801) 0:54:51.581 ********
2026-03-26 05:58:11.109506 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109523 | orchestrator |
2026-03-26 05:58:11.109541 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 05:58:11.109559 | orchestrator | Thursday 26 March 2026 05:57:29 +0000 (0:00:01.133) 0:54:52.714 ********
2026-03-26 05:58:11.109579 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109598 | orchestrator |
2026-03-26 05:58:11.109617 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:58:11.109663 | orchestrator | Thursday 26 March 2026 05:57:30 +0000 (0:00:01.110) 0:54:53.825 ********
2026-03-26 05:58:11.109676 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109688 | orchestrator |
2026-03-26 05:58:11.109700 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:58:11.109712 | orchestrator | Thursday 26 March 2026 05:57:31 +0000 (0:00:01.222) 0:54:55.048 ********
2026-03-26 05:58:11.109724 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109736 | orchestrator |
2026-03-26 05:58:11.109749 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:58:11.109761 | orchestrator | Thursday 26 March 2026 05:57:32 +0000 (0:00:01.180) 0:54:56.228 ********
2026-03-26 05:58:11.109772 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109785 | orchestrator |
2026-03-26 05:58:11.109795 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:58:11.109806 | orchestrator | Thursday 26 March 2026 05:57:33 +0000 (0:00:01.184) 0:54:57.413 ********
2026-03-26 05:58:11.109816 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.109828 | orchestrator |
2026-03-26 05:58:11.109838 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:58:11.109849 | orchestrator | Thursday 26 March 2026 05:57:34 +0000 (0:00:01.233) 0:54:58.647 ********
2026-03-26 05:58:11.109860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 05:58:11.109870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 05:58:11.109881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 05:58:11.109892 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109902 | orchestrator |
2026-03-26 05:58:11.109913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:58:11.109924 | orchestrator | Thursday 26 March 2026 05:57:36 +0000 (0:00:01.520) 0:55:00.168 ********
2026-03-26 05:58:11.109935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 05:58:11.109945 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 05:58:11.109956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 05:58:11.109977 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.109988 | orchestrator |
2026-03-26 05:58:11.109998 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:58:11.110009 | orchestrator | Thursday 26 March 2026 05:57:37 +0000 (0:00:01.450) 0:55:01.619 ********
2026-03-26 05:58:11.110145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 05:58:11.110165 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 05:58:11.110182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 05:58:11.110221 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.110241 | orchestrator |
2026-03-26 05:58:11.110260 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:58:11.110279 | orchestrator | Thursday 26 March 2026 05:57:39 +0000 (0:00:01.470) 0:55:03.089 ********
2026-03-26 05:58:11.110297 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.110315 | orchestrator |
2026-03-26 05:58:11.110326 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:58:11.110337 | orchestrator | Thursday 26 March 2026 05:57:40 +0000 (0:00:01.175) 0:55:04.265 ********
2026-03-26 05:58:11.110348 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 05:58:11.110358 | orchestrator |
2026-03-26 05:58:11.110369 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 05:58:11.110379 | orchestrator | Thursday 26 March 2026 05:57:41 +0000 (0:00:01.384) 0:55:05.650 ********
2026-03-26 05:58:11.110390 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.110401 | orchestrator |
2026-03-26 05:58:11.110411 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-26 05:58:11.110422 | orchestrator | Thursday 26 March 2026 05:57:43 +0000 (0:00:01.845) 0:55:07.495 ********
2026-03-26 05:58:11.110433 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.110443 | orchestrator |
2026-03-26 05:58:11.110454 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-26 05:58:11.110465 | orchestrator | Thursday 26 March 2026 05:57:44 +0000 (0:00:01.154) 0:55:08.649 ********
2026-03-26 05:58:11.110485 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3
2026-03-26 05:58:11.110496 | orchestrator |
2026-03-26 05:58:11.110506 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-26 05:58:11.110517 | orchestrator | Thursday 26 March 2026 05:57:46 +0000 (0:00:01.987) 0:55:10.469 ********
2026-03-26 05:58:11.110527 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-26 05:58:11.110538 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-26 05:58:11.110548 | orchestrator |
2026-03-26 05:58:11.110559 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-26 05:58:11.110570 | orchestrator | Thursday 26 March 2026 05:57:48 +0000 (0:00:01.987) 0:55:12.457 ********
2026-03-26 05:58:11.110580 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 05:58:11.110591 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-26 05:58:11.110601 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-26 05:58:11.110612 | orchestrator |
2026-03-26 05:58:11.110649 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-26 05:58:11.110663 | orchestrator | Thursday 26 March 2026 05:57:51 +0000 (0:00:03.181) 0:55:15.639 ********
2026-03-26 05:58:11.110674 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-03-26 05:58:11.110685 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-26 05:58:11.110695 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.110706 | orchestrator |
2026-03-26 05:58:11.110717 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-26 05:58:11.110727 | orchestrator | Thursday 26 March 2026 05:57:53 +0000 (0:00:01.946) 0:55:17.585 ********
2026-03-26 05:58:11.110738 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.110759 | orchestrator |
2026-03-26 05:58:11.110770 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-26 05:58:11.110781 | orchestrator | Thursday 26 March 2026 05:57:55 +0000 (0:00:01.579) 0:55:19.165 ********
2026-03-26 05:58:11.110791 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:11.110802 | orchestrator |
2026-03-26 05:58:11.110812 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-26 05:58:11.110823 | orchestrator | Thursday 26 March 2026 05:57:56 +0000 (0:00:01.149) 0:55:20.314 ********
2026-03-26 05:58:11.110833 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3
2026-03-26 05:58:11.110845 | orchestrator |
2026-03-26 05:58:11.110856 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-26 05:58:11.110866 | orchestrator | Thursday 26 March 2026 05:57:58 +0000 (0:00:01.552) 0:55:21.867 ********
2026-03-26 05:58:11.110877 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3
2026-03-26 05:58:11.110887 | orchestrator |
2026-03-26 05:58:11.110898 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-26 05:58:11.110909 | orchestrator | Thursday 26 March 2026 05:57:59 +0000 (0:00:01.576) 0:55:23.444 ********
2026-03-26 05:58:11.110919 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.110930 | orchestrator |
2026-03-26 05:58:11.110940 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-26 05:58:11.110951 | orchestrator | Thursday 26 March 2026 05:58:01 +0000 (0:00:01.996) 0:55:25.440 ********
2026-03-26 05:58:11.110962 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.110972 | orchestrator |
2026-03-26 05:58:11.110983 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-26 05:58:11.110993 | orchestrator | Thursday 26 March 2026 05:58:03 +0000 (0:00:02.290) 0:55:27.335 ********
2026-03-26 05:58:11.111004 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.111014 | orchestrator |
2026-03-26 05:58:11.111025 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-26 05:58:11.111035 | orchestrator | Thursday 26 March 2026 05:58:05 +0000 (0:00:02.324) 0:55:29.625 ********
2026-03-26 05:58:11.111046 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.111056 | orchestrator |
2026-03-26 05:58:11.111067 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-26 05:58:11.111078 | orchestrator | Thursday 26 March 2026 05:58:08 +0000 (0:00:02.324) 0:55:31.950 ********
2026-03-26 05:58:11.111089 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:11.111099 | orchestrator |
2026-03-26 05:58:11.111110 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-03-26 05:58:11.111121 | orchestrator | Thursday 26 March 2026 05:58:09 +0000 (0:00:01.635) 0:55:33.586 ********
2026-03-26 05:58:11.111140 | orchestrator | skipping: [testbed-node-3]
2026-03-26 05:58:40.157296 | orchestrator |
2026-03-26 05:58:40.157375 | orchestrator | TASK [Restart active mds] ******************************************************
2026-03-26 05:58:40.157382 | orchestrator | Thursday 26 March 2026 05:58:11 +0000 (0:00:01.165) 0:55:34.751 ********
2026-03-26 05:58:40.157387 | orchestrator | ok: [testbed-node-3]
2026-03-26 05:58:40.157393 | orchestrator |
2026-03-26 05:58:40.157397 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-03-26 05:58:40.157402 | orchestrator |
2026-03-26 05:58:40.157407 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-26 05:58:40.157412 | orchestrator | Thursday 26 March 2026 05:58:15 +0000 (0:00:04.539) 0:55:39.291 ********
2026-03-26 05:58:40.157416 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4, testbed-node-5
2026-03-26 05:58:40.157422 | orchestrator |
2026-03-26 05:58:40.157426 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-26 05:58:40.157431 | orchestrator | Thursday 26 March 2026 05:58:17 +0000 (0:00:01.442) 0:55:40.734 ********
2026-03-26 05:58:40.157435 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157440 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157460 | orchestrator |
2026-03-26 05:58:40.157465 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-26 05:58:40.157469 | orchestrator | Thursday 26 March 2026 05:58:18 +0000 (0:00:01.673) 0:55:42.407 ********
2026-03-26 05:58:40.157473 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157478 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157482 | orchestrator |
2026-03-26 05:58:40.157497 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 05:58:40.157502 | orchestrator | Thursday 26 March 2026 05:58:20 +0000 (0:00:01.527) 0:55:43.936 ********
2026-03-26 05:58:40.157507 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157511 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157515 | orchestrator |
2026-03-26 05:58:40.157520 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 05:58:40.157524 | orchestrator | Thursday 26 March 2026 05:58:21 +0000 (0:00:01.530) 0:55:45.467 ********
2026-03-26 05:58:40.157529 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157533 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157538 | orchestrator |
2026-03-26 05:58:40.157542 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-26 05:58:40.157547 | orchestrator | Thursday 26 March 2026 05:58:23 +0000 (0:00:01.410) 0:55:46.878 ********
2026-03-26 05:58:40.157551 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157555 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157560 | orchestrator |
2026-03-26 05:58:40.157564 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-26 05:58:40.157569 | orchestrator | Thursday 26 March 2026 05:58:24 +0000 (0:00:01.338) 0:55:48.217 ********
2026-03-26 05:58:40.157573 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157578 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157582 | orchestrator |
2026-03-26 05:58:40.157587 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-26 05:58:40.157591 | orchestrator | Thursday 26 March 2026 05:58:25 +0000 (0:00:01.342) 0:55:49.559 ********
2026-03-26 05:58:40.157596 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:40.157601 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:58:40.157606 | orchestrator |
2026-03-26 05:58:40.157610 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-26 05:58:40.157655 | orchestrator | Thursday 26 March 2026 05:58:27 +0000 (0:00:01.272) 0:55:50.832 ********
2026-03-26 05:58:40.157660 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157665 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157669 | orchestrator |
2026-03-26 05:58:40.157674 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-26 05:58:40.157678 | orchestrator | Thursday 26 March 2026 05:58:28 +0000 (0:00:01.355) 0:55:52.187 ********
2026-03-26 05:58:40.157683 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:58:40.157687 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:58:40.157692 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:58:40.157696 | orchestrator |
2026-03-26 05:58:40.157701 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-26 05:58:40.157705 | orchestrator | Thursday 26 March 2026 05:58:30 +0000 (0:00:01.742) 0:55:53.930 ********
2026-03-26 05:58:40.157709 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:40.157714 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:40.157718 | orchestrator |
2026-03-26 05:58:40.157723 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-26 05:58:40.157727 | orchestrator | Thursday 26 March 2026 05:58:31 +0000 (0:00:01.360) 0:55:55.291 ********
2026-03-26 05:58:40.157732 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:58:40.157736 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:58:40.157745 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:58:40.157750 | orchestrator |
2026-03-26 05:58:40.157754 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-26 05:58:40.157758 | orchestrator | Thursday 26 March 2026 05:58:34 +0000 (0:00:03.016) 0:55:58.308 ********
2026-03-26 05:58:40.157763 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 05:58:40.157767 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 05:58:40.157772 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 05:58:40.157776 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:40.157781 | orchestrator |
2026-03-26 05:58:40.157785 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-26 05:58:40.157790 | orchestrator | Thursday 26 March 2026 05:58:36 +0000 (0:00:01.430) 0:55:59.738 ********
2026-03-26 05:58:40.157806 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157823 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:40.157828 | orchestrator |
2026-03-26 05:58:40.157832 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-26 05:58:40.157837 | orchestrator | Thursday 26 March 2026 05:58:37 +0000 (0:00:01.644) 0:56:01.382 ********
2026-03-26 05:58:40.157847 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157859 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157865 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:40.157870 | orchestrator |
2026-03-26 05:58:40.157875 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-26 05:58:40.157880 | orchestrator | Thursday 26 March 2026 05:58:38 +0000 (0:00:01.165) 0:56:02.548 ********
2026-03-26 05:58:40.157887 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 05:58:32.222055', 'end': '2026-03-26 05:58:32.279175', 'delta': '0:00:00.057120', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157898 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 05:58:32.799939', 'end': '2026-03-26 05:58:32.846328', 'delta': '0:00:00.046389', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 05:58:40.157909 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 05:58:33.349459', 'end': '2026-03-26 05:58:33.398492', 'delta': '0:00:00.049033', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 05:58:59.703996 | orchestrator |
2026-03-26 05:58:59.704101 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-26 05:58:59.704116 | orchestrator | Thursday 26 March 2026 05:58:40 +0000 (0:00:01.255) 0:56:03.803 ********
2026-03-26 05:58:59.704127 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:59.704137 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:59.704147 | orchestrator |
2026-03-26 05:58:59.704157 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-26 05:58:59.704167 | orchestrator | Thursday 26 March 2026 05:58:41 +0000 (0:00:01.250) 0:56:05.254 ********
2026-03-26 05:58:59.704176 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:59.704187 | orchestrator |
2026-03-26 05:58:59.704196 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-26 05:58:59.704220 | orchestrator | Thursday 26 March 2026 05:58:42 +0000 (0:00:01.250) 0:56:06.505 ********
2026-03-26 05:58:59.704230 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:59.704240 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:59.704249 | orchestrator |
2026-03-26 05:58:59.704259 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 05:58:59.704273 | orchestrator | Thursday 26 March 2026 05:58:44 +0000 (0:00:01.373) 0:56:07.878 ********
2026-03-26 05:58:59.704290 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:58:59.704308 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-26 05:58:59.704325 | orchestrator |
2026-03-26 05:58:59.704342 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:58:59.704382 | orchestrator | Thursday 26 March 2026 05:58:46 +0000 (0:00:02.318) 0:56:10.196 ********
2026-03-26 05:58:59.704392 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:59.704402 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:59.704411 | orchestrator |
2026-03-26 05:58:59.704420 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 05:58:59.704452 | orchestrator | Thursday 26 March 2026 05:58:47 +0000 (0:00:01.273) 0:56:11.470 ********
2026-03-26 05:58:59.704462 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:59.704472 | orchestrator |
2026-03-26 05:58:59.704481 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 05:58:59.704491 | orchestrator | Thursday 26 March 2026 05:58:48 +0000 (0:00:01.155) 0:56:12.626 ********
2026-03-26 05:58:59.704500 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:59.704509 | orchestrator |
2026-03-26 05:58:59.704518 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 05:58:59.704527 | orchestrator | Thursday 26 March 2026 05:58:50 +0000 (0:00:01.222) 0:56:13.848 ********
2026-03-26 05:58:59.704537 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:59.704548 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:58:59.704559 | orchestrator |
2026-03-26 05:58:59.704569 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 05:58:59.704580 | orchestrator | Thursday 26 March 2026 05:58:51 +0000 (0:00:01.240) 0:56:15.088 ********
2026-03-26 05:58:59.704591 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:59.704601 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:58:59.704677 | orchestrator |
2026-03-26 05:58:59.704691 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 05:58:59.704703 | orchestrator | Thursday 26 March 2026 05:58:52 +0000 (0:00:01.242) 0:56:16.331 ********
2026-03-26 05:58:59.704714 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:59.704724 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:59.704735 | orchestrator |
2026-03-26 05:58:59.704746 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 05:58:59.704757 | orchestrator | Thursday 26 March 2026 05:58:53 +0000 (0:00:01.320) 0:56:17.652 ********
2026-03-26 05:58:59.704768 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:59.704779 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:58:59.704790 | orchestrator |
2026-03-26 05:58:59.704801 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 05:58:59.704811 | orchestrator | Thursday 26 March 2026 05:58:55 +0000 (0:00:01.617) 0:56:19.269 ********
2026-03-26 05:58:59.704822 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:59.704833 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:59.704844 | orchestrator |
2026-03-26 05:58:59.704855 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 05:58:59.704866 | orchestrator | Thursday 26 March 2026 05:58:56 +0000 (0:00:01.279) 0:56:20.549 ********
2026-03-26 05:58:59.704877 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:58:59.704888 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:58:59.704898 | orchestrator |
2026-03-26 05:58:59.704908 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 05:58:59.704918 | orchestrator | Thursday 26 March 2026 05:58:58 +0000 (0:00:01.264) 0:56:21.814 ********
2026-03-26 05:58:59.704928 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:58:59.704937 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:58:59.704947 | orchestrator |
2026-03-26 05:58:59.704957 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-26 05:58:59.704966 | orchestrator | Thursday 26 March 2026 05:58:59 +0000 (0:00:01.314) 0:56:23.128 ********
2026-03-26 05:58:59.704979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:58:59.705013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'uuids': ['1d39f6c5-1f6c-4630-99cd-a410ca5e45d8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt']}})
2026-03-26 05:58:59.705042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7e352b46', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-03-26 05:58:59.705054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e']}})
2026-03-26 05:58:59.705065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:58:59.705076 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-03-26 05:58:59.705087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-03-26 05:58:59.705098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:58:59.705116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG', 'dm-uuid-CRYPT-LUKS2-741ece0a80b8415aa2e2dcc695db5f53-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:58:59.847809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:58:59.847931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'uuids': ['741ece0a-80b8-415a-a2e2-dcc695db5f53'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG']}})  2026-03-26 05:58:59.847949 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543']}})  2026-03-26 05:58:59.847962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:58:59.848004 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48d73a84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 05:58:59.848040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:58:59.848053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:58:59.848065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt', 'dm-uuid-CRYPT-LUKS2-1d39f6c51f6c463099cda410ca5e45d8-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:58:59.848077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-03-26 05:58:59.848089 | orchestrator | skipping: [testbed-node-4] 2026-03-26 05:58:59.848103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'uuids': ['958c3d71-9b3b-484b-8cbf-f174ba1f6fac'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp']}})  2026-03-26 05:58:59.848114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ddd7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 05:58:59.848146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66']}})  2026-03-26 05:59:00.986928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-15-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 05:59:00.987067 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987080 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD', 'dm-uuid-CRYPT-LUKS2-4b88786507c84424981e8c33baf61cbe-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987091 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987127 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'uuids': ['4b887865-07c8-4424-981e-8c33baf61cbe'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD']}})  2026-03-26 05:59:00.987173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771']}})  2026-03-26 05:59:00.987187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 05:59:00.987224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 05:59:00.987262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 05:59:01.222944 | orchestrator | skipping: [testbed-node-5] 2026-03-26 05:59:01.223063 | orchestrator | 2026-03-26 05:59:01.223079 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 05:59:01.223091 | orchestrator | Thursday 26 March 2026 05:59:00 +0000 (0:00:01.509) 0:56:24.638 ******** 2026-03-26 05:59:01.223105 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'uuids': ['1d39f6c5-1f6c-4630-99cd-a410ca5e45d8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223132 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7e352b46', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223223 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223234 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223262 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG', 'dm-uuid-CRYPT-LUKS2-741ece0a80b8415aa2e2dcc695db5f53-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223288 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.223306 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'uuids': ['741ece0a-80b8-415a-a2e2-dcc695db5f53'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG']}}, 'ansible_loop_var': 'item'})  2026-03-26 05:59:01.287917 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'uuids': ['958c3d71-9b3b-484b-8cbf-f174ba1f6fac'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp']}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288019 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543']}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288060 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ddd7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288089 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288120 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66']}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48d73a84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288163 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.288198 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389362 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389502 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt', 'dm-uuid-CRYPT-LUKS2-1d39f6c51f6c463099cda410ca5e45d8-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389512 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389523 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:01.389549 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD', 'dm-uuid-CRYPT-LUKS2-4b88786507c84424981e8c33baf61cbe-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389576 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389588 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'uuids': ['4b887865-07c8-4424-981e-8c33baf61cbe'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD']}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771']}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389660 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:01.389685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:31.433201 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:31.433333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:31.433360 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 05:59:31.433401 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.433422 | orchestrator |
2026-03-26 05:59:31.433440 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 05:59:31.433458 | orchestrator | Thursday 26 March 2026 05:59:02 +0000 (0:00:01.559) 0:56:26.197 ********
2026-03-26 05:59:31.433476 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:59:31.433494 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:59:31.433511 | orchestrator |
2026-03-26 05:59:31.433528 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 05:59:31.433545 | orchestrator | Thursday 26 March 2026 05:59:04 +0000 (0:00:01.668) 0:56:27.866 ********
2026-03-26 05:59:31.433563 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:59:31.433579 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:59:31.433595 | orchestrator |
2026-03-26 05:59:31.433686 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:59:31.433705 | orchestrator | Thursday 26 March 2026 05:59:05 +0000 (0:00:01.240) 0:56:29.107 ********
2026-03-26 05:59:31.433721 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:59:31.433739 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:59:31.433757 | orchestrator |
2026-03-26 05:59:31.433777 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:59:31.433795 | orchestrator | Thursday 26 March 2026 05:59:07 +0000 (0:00:01.618) 0:56:30.726 ********
2026-03-26 05:59:31.433848 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.433866 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.433877 | orchestrator |
2026-03-26 05:59:31.433889 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 05:59:31.433900 | orchestrator | Thursday 26 March 2026 05:59:08 +0000 (0:00:01.246) 0:56:31.972 ********
2026-03-26 05:59:31.433911 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.433922 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.433932 | orchestrator |
2026-03-26 05:59:31.433943 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 05:59:31.433954 | orchestrator | Thursday 26 March 2026 05:59:09 +0000 (0:00:01.364) 0:56:33.336 ********
2026-03-26 05:59:31.433965 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.433975 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.433986 | orchestrator |
2026-03-26 05:59:31.433996 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 05:59:31.434007 | orchestrator | Thursday 26 March 2026 05:59:10 +0000 (0:00:01.295) 0:56:34.632 ********
2026-03-26 05:59:31.434078 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 05:59:31.434090 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-26 05:59:31.434101 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 05:59:31.434111 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 05:59:31.434121 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-26 05:59:31.434131 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-26 05:59:31.434140 | orchestrator |
2026-03-26 05:59:31.434150 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 05:59:31.434159 | orchestrator | Thursday 26 March 2026 05:59:13 +0000 (0:00:02.267) 0:56:36.899 ********
2026-03-26 05:59:31.434190 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 05:59:31.434201 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 05:59:31.434210 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 05:59:31.434220 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.434229 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-26 05:59:31.434239 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-26 05:59:31.434248 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-26 05:59:31.434258 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.434267 | orchestrator |
2026-03-26 05:59:31.434277 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 05:59:31.434286 | orchestrator | Thursday 26 March 2026 05:59:14 +0000 (0:00:01.597) 0:56:38.497 ********
2026-03-26 05:59:31.434297 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4, testbed-node-5
2026-03-26 05:59:31.434308 | orchestrator |
2026-03-26 05:59:31.434367 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 05:59:31.434378 | orchestrator | Thursday 26 March 2026 05:59:16 +0000 (0:00:01.323) 0:56:39.821 ********
2026-03-26 05:59:31.434388 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.434397 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.434407 | orchestrator |
2026-03-26 05:59:31.434416 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 05:59:31.434426 | orchestrator | Thursday 26 March 2026 05:59:17 +0000 (0:00:01.335) 0:56:41.157 ********
2026-03-26 05:59:31.434435 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.434445 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.434454 | orchestrator |
2026-03-26 05:59:31.434463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 05:59:31.434473 | orchestrator | Thursday 26 March 2026 05:59:18 +0000 (0:00:01.262) 0:56:42.419 ********
2026-03-26 05:59:31.434482 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.434501 | orchestrator | skipping: [testbed-node-5]
2026-03-26 05:59:31.434511 | orchestrator |
2026-03-26 05:59:31.434520 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 05:59:31.434530 | orchestrator | Thursday 26 March 2026 05:59:19 +0000 (0:00:01.224) 0:56:43.644 ********
2026-03-26 05:59:31.434539 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:59:31.434549 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:59:31.434558 | orchestrator |
2026-03-26 05:59:31.434567 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 05:59:31.434577 | orchestrator | Thursday 26 March 2026 05:59:21 +0000 (0:00:01.347) 0:56:44.991 ********
2026-03-26 05:59:31.434586 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-26 05:59:31.434630 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 05:59:31.434641 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-26 05:59:31.434651 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.434660 | orchestrator |
2026-03-26 05:59:31.434670 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 05:59:31.434679 | orchestrator | Thursday 26 March 2026 05:59:23 +0000 (0:00:01.796) 0:56:46.788 ********
2026-03-26 05:59:31.434688 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-26 05:59:31.434698 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 05:59:31.434707 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-26 05:59:31.434717 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.434726 | orchestrator |
2026-03-26 05:59:31.434736 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 05:59:31.434745 | orchestrator | Thursday 26 March 2026 05:59:24 +0000 (0:00:01.420) 0:56:48.208 ********
2026-03-26 05:59:31.434755 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-26 05:59:31.434765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 05:59:31.434774 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-26 05:59:31.434784 | orchestrator | skipping: [testbed-node-4]
2026-03-26 05:59:31.434793 | orchestrator |
2026-03-26 05:59:31.434803 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 05:59:31.434812 | orchestrator | Thursday 26 March 2026 05:59:25 +0000 (0:00:01.421) 0:56:49.630 ********
2026-03-26 05:59:31.434822 | orchestrator | ok: [testbed-node-4]
2026-03-26 05:59:31.434831 | orchestrator | ok: [testbed-node-5]
2026-03-26 05:59:31.434841 | orchestrator |
2026-03-26 05:59:31.434850 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 05:59:31.434860 | orchestrator | Thursday 26 March 2026 05:59:27 +0000 (0:00:01.300) 0:56:50.931 ********
2026-03-26 05:59:31.434869 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-26 05:59:31.434879 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-26 05:59:31.434888 | orchestrator |
2026-03-26 05:59:31.434898 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 05:59:31.434907 | orchestrator | Thursday 26 March 2026 05:59:29 +0000 (0:00:01.936) 0:56:52.868 ********
2026-03-26 05:59:31.434917 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 05:59:31.434926 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 05:59:31.434936 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 05:59:31.434945 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 05:59:31.434955 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 05:59:31.434964 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 05:59:31.434981 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 06:00:16.329456 | orchestrator |
2026-03-26 06:00:16.329568 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 06:00:16.329579 | orchestrator | Thursday 26 March 2026 05:59:31 +0000 (0:00:02.203) 0:56:55.071 ********
2026-03-26 06:00:16.329586 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 06:00:16.329593 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 06:00:16.329659 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 06:00:16.329670 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 06:00:16.329677 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 06:00:16.329684 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 06:00:16.329690 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 06:00:16.329696 | orchestrator |
2026-03-26 06:00:16.329703 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-03-26 06:00:16.329709 | orchestrator | Thursday 26 March 2026 05:59:34 +0000 (0:00:02.707) 0:56:57.778 ********
2026-03-26 06:00:16.329715 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.329722 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.329728 | orchestrator |
2026-03-26 06:00:16.329734 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 06:00:16.329740 | orchestrator | Thursday 26 March 2026 05:59:35 +0000 (0:00:01.214) 0:56:58.993 ********
2026-03-26 06:00:16.329746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4, testbed-node-5
2026-03-26 06:00:16.329753 | orchestrator |
2026-03-26 06:00:16.329759 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 06:00:16.329765 | orchestrator | Thursday 26 March 2026 05:59:36 +0000 (0:00:01.540) 0:57:00.533 ********
2026-03-26 06:00:16.329771 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4, testbed-node-5
2026-03-26 06:00:16.329777 | orchestrator |
2026-03-26 06:00:16.329783 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 06:00:16.329789 | orchestrator | Thursday 26 March 2026 05:59:38 +0000 (0:00:01.267) 0:57:01.800 ********
2026-03-26 06:00:16.329795 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.329801 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.329807 | orchestrator |
2026-03-26 06:00:16.329814 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 06:00:16.329820 | orchestrator | Thursday 26 March 2026 05:59:39 +0000 (0:00:01.300) 0:57:03.101 ********
2026-03-26 06:00:16.329837 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.329844 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.329851 | orchestrator |
2026-03-26 06:00:16.329858 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 06:00:16.329865 | orchestrator | Thursday 26 March 2026 05:59:41 +0000 (0:00:01.581) 0:57:04.682 ********
2026-03-26 06:00:16.329872 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.329879 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.329886 | orchestrator |
2026-03-26 06:00:16.329893 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 06:00:16.329900 | orchestrator | Thursday 26 March 2026 05:59:42 +0000 (0:00:01.705) 0:57:06.387 ********
2026-03-26 06:00:16.329907 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.329914 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.329921 | orchestrator |
2026-03-26 06:00:16.329929 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 06:00:16.329936 | orchestrator | Thursday 26 March 2026 05:59:44 +0000 (0:00:01.631) 0:57:08.019 ********
2026-03-26 06:00:16.329943 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.329950 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.329964 | orchestrator |
2026-03-26 06:00:16.329971 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 06:00:16.329978 | orchestrator | Thursday 26 March 2026 05:59:45 +0000 (0:00:01.243) 0:57:09.262 ********
2026-03-26 06:00:16.329985 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.329992 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.329999 | orchestrator |
2026-03-26 06:00:16.330006 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 06:00:16.330063 | orchestrator | Thursday 26 March 2026 05:59:46 +0000 (0:00:01.247) 0:57:10.510 ********
2026-03-26 06:00:16.330074 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.330083 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.330091 | orchestrator |
2026-03-26 06:00:16.330100 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 06:00:16.330109 | orchestrator | Thursday 26 March 2026 05:59:48 +0000 (0:00:01.277) 0:57:11.787 ********
2026-03-26 06:00:16.330117 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.330125 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.330133 | orchestrator |
2026-03-26 06:00:16.330141 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 06:00:16.330149 | orchestrator | Thursday 26 March 2026 05:59:49 +0000 (0:00:01.612) 0:57:13.400 ********
2026-03-26 06:00:16.330158 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.330166 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.330174 | orchestrator |
2026-03-26 06:00:16.330182 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 06:00:16.330191 | orchestrator | Thursday 26 March 2026 05:59:51 +0000 (0:00:01.646) 0:57:15.039 ********
2026-03-26 06:00:16.330199 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.330207 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.330216 | orchestrator |
2026-03-26 06:00:16.330224 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 06:00:16.330232 | orchestrator | Thursday 26 March 2026 05:59:53 +0000 (0:00:01.646) 0:57:16.685 ********
2026-03-26 06:00:16.330241 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.330265 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.330274 | orchestrator |
2026-03-26 06:00:16.330282 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 06:00:16.330290 | orchestrator | Thursday 26 March 2026 05:59:54 +0000 (0:00:01.256) 0:57:17.942 ********
2026-03-26 06:00:16.330299 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.330307 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.330315 | orchestrator |
2026-03-26 06:00:16.330323 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 06:00:16.330332 | orchestrator | Thursday 26 March 2026 05:59:55 +0000 (0:00:01.324) 0:57:19.266 ********
2026-03-26 06:00:16.330340 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.330348 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.330356 | orchestrator |
2026-03-26 06:00:16.330364 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 06:00:16.330373 | orchestrator | Thursday 26 March 2026 05:59:56 +0000 (0:00:01.264) 0:57:20.531 ********
2026-03-26 06:00:16.330381 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:00:16.330390 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:00:16.330398 | orchestrator |
2026-03-26 06:00:16.330406 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 06:00:16.330413 | orchestrator | Thursday 26 March 2026 05:59:58 +0000 (0:00:01.272) 0:57:21.804 ********
2026-03-26 06:00:16.330420 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.330427 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.330437 | orchestrator |
2026-03-26 06:00:16.330447 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 06:00:16.330458 | orchestrator | Thursday 26 March 2026 05:59:59 +0000 (0:00:01.682) 0:57:23.486 ********
2026-03-26 06:00:16.330470 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:00:16.330489 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:00:16.330502 | orchestrator |
2026-03-26 06:00:16.330509 | orchestrator | TASK [ceph-handler : Set_fact
handler_mgr_status] ****************************** 2026-03-26 06:00:16.330517 | orchestrator | Thursday 26 March 2026 06:00:01 +0000 (0:00:01.238) 0:57:24.725 ******** 2026-03-26 06:00:16.330524 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330531 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330538 | orchestrator | 2026-03-26 06:00:16.330545 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 06:00:16.330552 | orchestrator | Thursday 26 March 2026 06:00:02 +0000 (0:00:01.668) 0:57:26.395 ******** 2026-03-26 06:00:16.330559 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:00:16.330567 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:00:16.330574 | orchestrator | 2026-03-26 06:00:16.330581 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 06:00:16.330588 | orchestrator | Thursday 26 March 2026 06:00:03 +0000 (0:00:01.243) 0:57:27.638 ******** 2026-03-26 06:00:16.330617 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:00:16.330629 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:00:16.330641 | orchestrator | 2026-03-26 06:00:16.330660 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-26 06:00:16.330673 | orchestrator | Thursday 26 March 2026 06:00:05 +0000 (0:00:01.346) 0:57:28.985 ******** 2026-03-26 06:00:16.330680 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330688 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330695 | orchestrator | 2026-03-26 06:00:16.330702 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 06:00:16.330709 | orchestrator | Thursday 26 March 2026 06:00:06 +0000 (0:00:01.278) 0:57:30.263 ******** 2026-03-26 06:00:16.330716 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330723 | orchestrator | skipping: [testbed-node-5] 
2026-03-26 06:00:16.330730 | orchestrator | 2026-03-26 06:00:16.330737 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 06:00:16.330744 | orchestrator | Thursday 26 March 2026 06:00:07 +0000 (0:00:01.298) 0:57:31.562 ******** 2026-03-26 06:00:16.330751 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330759 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330766 | orchestrator | 2026-03-26 06:00:16.330773 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 06:00:16.330780 | orchestrator | Thursday 26 March 2026 06:00:09 +0000 (0:00:01.164) 0:57:32.726 ******** 2026-03-26 06:00:16.330787 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330794 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330801 | orchestrator | 2026-03-26 06:00:16.330808 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 06:00:16.330815 | orchestrator | Thursday 26 March 2026 06:00:10 +0000 (0:00:01.284) 0:57:34.011 ******** 2026-03-26 06:00:16.330822 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330829 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330836 | orchestrator | 2026-03-26 06:00:16.330843 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 06:00:16.330850 | orchestrator | Thursday 26 March 2026 06:00:11 +0000 (0:00:01.170) 0:57:35.181 ******** 2026-03-26 06:00:16.330857 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330864 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330871 | orchestrator | 2026-03-26 06:00:16.330878 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-26 06:00:16.330885 | orchestrator | Thursday 26 March 2026 06:00:12 +0000 (0:00:01.203) 0:57:36.385 ******** 
2026-03-26 06:00:16.330892 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330899 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330906 | orchestrator | 2026-03-26 06:00:16.330914 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-26 06:00:16.330921 | orchestrator | Thursday 26 March 2026 06:00:13 +0000 (0:00:01.167) 0:57:37.552 ******** 2026-03-26 06:00:16.330933 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330940 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330947 | orchestrator | 2026-03-26 06:00:16.330954 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 06:00:16.330961 | orchestrator | Thursday 26 March 2026 06:00:15 +0000 (0:00:01.195) 0:57:38.748 ******** 2026-03-26 06:00:16.330969 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:00:16.330976 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:00:16.330983 | orchestrator | 2026-03-26 06:00:16.330995 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 06:01:01.330509 | orchestrator | Thursday 26 March 2026 06:00:16 +0000 (0:00:01.229) 0:57:39.978 ******** 2026-03-26 06:01:01.330647 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.330659 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.330665 | orchestrator | 2026-03-26 06:01:01.330672 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-26 06:01:01.330679 | orchestrator | Thursday 26 March 2026 06:00:17 +0000 (0:00:01.673) 0:57:41.651 ******** 2026-03-26 06:01:01.330686 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.330692 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.330698 | orchestrator | 2026-03-26 06:01:01.330705 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-26 06:01:01.330711 | orchestrator | Thursday 26 March 2026 06:00:19 +0000 (0:00:01.238) 0:57:42.890 ******** 2026-03-26 06:01:01.330718 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.330724 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.330730 | orchestrator | 2026-03-26 06:01:01.330736 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-26 06:01:01.330742 | orchestrator | Thursday 26 March 2026 06:00:20 +0000 (0:00:01.227) 0:57:44.118 ******** 2026-03-26 06:01:01.330749 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:01:01.330756 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:01:01.330762 | orchestrator | 2026-03-26 06:01:01.330768 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 06:01:01.330774 | orchestrator | Thursday 26 March 2026 06:00:22 +0000 (0:00:02.049) 0:57:46.167 ******** 2026-03-26 06:01:01.330781 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:01:01.330787 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:01:01.330793 | orchestrator | 2026-03-26 06:01:01.330799 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 06:01:01.330805 | orchestrator | Thursday 26 March 2026 06:00:24 +0000 (0:00:02.379) 0:57:48.546 ******** 2026-03-26 06:01:01.330812 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4, testbed-node-5 2026-03-26 06:01:01.330819 | orchestrator | 2026-03-26 06:01:01.330825 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-26 06:01:01.330831 | orchestrator | Thursday 26 March 2026 06:00:26 +0000 (0:00:01.473) 0:57:50.020 ******** 2026-03-26 06:01:01.330838 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.330844 | orchestrator | skipping: [testbed-node-5] 
2026-03-26 06:01:01.330850 | orchestrator | 2026-03-26 06:01:01.330857 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-26 06:01:01.330863 | orchestrator | Thursday 26 March 2026 06:00:27 +0000 (0:00:01.306) 0:57:51.326 ******** 2026-03-26 06:01:01.330879 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.330885 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.330899 | orchestrator | 2026-03-26 06:01:01.330919 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-26 06:01:01.330926 | orchestrator | Thursday 26 March 2026 06:00:28 +0000 (0:00:01.248) 0:57:52.575 ******** 2026-03-26 06:01:01.330932 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-26 06:01:01.330938 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-26 06:01:01.330960 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-26 06:01:01.330966 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-26 06:01:01.330972 | orchestrator | 2026-03-26 06:01:01.330978 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-26 06:01:01.330985 | orchestrator | Thursday 26 March 2026 06:00:30 +0000 (0:00:01.880) 0:57:54.455 ******** 2026-03-26 06:01:01.330991 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:01:01.330997 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:01:01.331003 | orchestrator | 2026-03-26 06:01:01.331009 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-26 06:01:01.331016 | orchestrator | Thursday 26 March 2026 06:00:32 +0000 (0:00:01.591) 0:57:56.047 ******** 2026-03-26 06:01:01.331022 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331028 | 
orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331034 | orchestrator | 2026-03-26 06:01:01.331040 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-26 06:01:01.331048 | orchestrator | Thursday 26 March 2026 06:00:33 +0000 (0:00:01.242) 0:57:57.289 ******** 2026-03-26 06:01:01.331055 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331062 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331070 | orchestrator | 2026-03-26 06:01:01.331077 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 06:01:01.331085 | orchestrator | Thursday 26 March 2026 06:00:34 +0000 (0:00:01.351) 0:57:58.641 ******** 2026-03-26 06:01:01.331092 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331099 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331107 | orchestrator | 2026-03-26 06:01:01.331114 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 06:01:01.331121 | orchestrator | Thursday 26 March 2026 06:00:36 +0000 (0:00:01.187) 0:57:59.829 ******** 2026-03-26 06:01:01.331129 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4, testbed-node-5 2026-03-26 06:01:01.331136 | orchestrator | 2026-03-26 06:01:01.331143 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-26 06:01:01.331151 | orchestrator | Thursday 26 March 2026 06:00:37 +0000 (0:00:01.249) 0:58:01.078 ******** 2026-03-26 06:01:01.331158 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:01:01.331165 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:01:01.331172 | orchestrator | 2026-03-26 06:01:01.331180 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-26 06:01:01.331187 | orchestrator | Thursday 26 March 2026 
06:00:39 +0000 (0:00:01.920) 0:58:02.999 ******** 2026-03-26 06:01:01.331194 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 06:01:01.331214 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 06:01:01.331221 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 06:01:01.331229 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331236 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 06:01:01.331244 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 06:01:01.331251 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 06:01:01.331258 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331265 | orchestrator | 2026-03-26 06:01:01.331272 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-26 06:01:01.331279 | orchestrator | Thursday 26 March 2026 06:00:40 +0000 (0:00:01.223) 0:58:04.222 ******** 2026-03-26 06:01:01.331286 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331293 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331300 | orchestrator | 2026-03-26 06:01:01.331307 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-26 06:01:01.331319 | orchestrator | Thursday 26 March 2026 06:00:41 +0000 (0:00:01.258) 0:58:05.480 ******** 2026-03-26 06:01:01.331326 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331333 | orchestrator | 2026-03-26 06:01:01.331341 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-26 06:01:01.331348 | orchestrator | Thursday 26 March 2026 06:00:43 +0000 (0:00:01.271) 0:58:06.752 ******** 2026-03-26 06:01:01.331355 | orchestrator 
| skipping: [testbed-node-4] 2026-03-26 06:01:01.331362 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331370 | orchestrator | 2026-03-26 06:01:01.331377 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-26 06:01:01.331384 | orchestrator | Thursday 26 March 2026 06:00:44 +0000 (0:00:01.239) 0:58:07.992 ******** 2026-03-26 06:01:01.331392 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331399 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331405 | orchestrator | 2026-03-26 06:01:01.331411 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-26 06:01:01.331417 | orchestrator | Thursday 26 March 2026 06:00:45 +0000 (0:00:01.323) 0:58:09.315 ******** 2026-03-26 06:01:01.331424 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331430 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331436 | orchestrator | 2026-03-26 06:01:01.331442 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 06:01:01.331448 | orchestrator | Thursday 26 March 2026 06:00:46 +0000 (0:00:01.289) 0:58:10.605 ******** 2026-03-26 06:01:01.331454 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:01:01.331460 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:01:01.331466 | orchestrator | 2026-03-26 06:01:01.331476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 06:01:01.331482 | orchestrator | Thursday 26 March 2026 06:00:49 +0000 (0:00:02.705) 0:58:13.310 ******** 2026-03-26 06:01:01.331488 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:01:01.331494 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:01:01.331500 | orchestrator | 2026-03-26 06:01:01.331506 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-26 06:01:01.331513 | 
orchestrator | Thursday 26 March 2026 06:00:50 +0000 (0:00:01.215) 0:58:14.526 ******** 2026-03-26 06:01:01.331519 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4, testbed-node-5 2026-03-26 06:01:01.331526 | orchestrator | 2026-03-26 06:01:01.331532 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-26 06:01:01.331538 | orchestrator | Thursday 26 March 2026 06:00:52 +0000 (0:00:01.462) 0:58:15.988 ******** 2026-03-26 06:01:01.331544 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331550 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331556 | orchestrator | 2026-03-26 06:01:01.331562 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-26 06:01:01.331568 | orchestrator | Thursday 26 March 2026 06:00:53 +0000 (0:00:01.268) 0:58:17.257 ******** 2026-03-26 06:01:01.331574 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331580 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331622 | orchestrator | 2026-03-26 06:01:01.331629 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-26 06:01:01.331635 | orchestrator | Thursday 26 March 2026 06:00:54 +0000 (0:00:01.286) 0:58:18.543 ******** 2026-03-26 06:01:01.331642 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331648 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331654 | orchestrator | 2026-03-26 06:01:01.331660 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-26 06:01:01.331666 | orchestrator | Thursday 26 March 2026 06:00:56 +0000 (0:00:01.272) 0:58:19.815 ******** 2026-03-26 06:01:01.331672 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331678 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331684 | orchestrator | 2026-03-26 
06:01:01.331690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-26 06:01:01.331701 | orchestrator | Thursday 26 March 2026 06:00:57 +0000 (0:00:01.256) 0:58:21.072 ******** 2026-03-26 06:01:01.331708 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331714 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331720 | orchestrator | 2026-03-26 06:01:01.331726 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-26 06:01:01.331732 | orchestrator | Thursday 26 March 2026 06:00:58 +0000 (0:00:01.263) 0:58:22.336 ******** 2026-03-26 06:01:01.331738 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331745 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331751 | orchestrator | 2026-03-26 06:01:01.331757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-26 06:01:01.331763 | orchestrator | Thursday 26 March 2026 06:01:00 +0000 (0:00:01.352) 0:58:23.688 ******** 2026-03-26 06:01:01.331769 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:01.331775 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:01.331781 | orchestrator | 2026-03-26 06:01:01.331792 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-26 06:01:42.326942 | orchestrator | Thursday 26 March 2026 06:01:01 +0000 (0:00:01.283) 0:58:24.972 ******** 2026-03-26 06:01:42.327095 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.327112 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.327124 | orchestrator | 2026-03-26 06:01:42.327136 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-26 06:01:42.327147 | orchestrator | Thursday 26 March 2026 06:01:02 +0000 (0:00:01.310) 0:58:26.283 ******** 2026-03-26 06:01:42.327159 | orchestrator | ok: 
[testbed-node-4] 2026-03-26 06:01:42.327171 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:01:42.327182 | orchestrator | 2026-03-26 06:01:42.327193 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-26 06:01:42.327204 | orchestrator | Thursday 26 March 2026 06:01:03 +0000 (0:00:01.254) 0:58:27.537 ******** 2026-03-26 06:01:42.327216 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4, testbed-node-5 2026-03-26 06:01:42.327228 | orchestrator | 2026-03-26 06:01:42.327238 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-26 06:01:42.327249 | orchestrator | Thursday 26 March 2026 06:01:05 +0000 (0:00:01.226) 0:58:28.763 ******** 2026-03-26 06:01:42.327260 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-03-26 06:01:42.327271 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-03-26 06:01:42.327282 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-26 06:01:42.327293 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-26 06:01:42.327304 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-26 06:01:42.327315 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-26 06:01:42.327325 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-26 06:01:42.327336 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-26 06:01:42.327346 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-26 06:01:42.327357 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-26 06:01:42.327368 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-26 06:01:42.327379 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-26 06:01:42.327390 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 
2026-03-26 06:01:42.327400 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-26 06:01:42.327411 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-26 06:01:42.327422 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-26 06:01:42.327432 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 06:01:42.327443 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-26 06:01:42.327502 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 06:01:42.327514 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-26 06:01:42.327525 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 06:01:42.327536 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-26 06:01:42.327547 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 06:01:42.327557 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-26 06:01:42.327568 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 06:01:42.327603 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-26 06:01:42.327615 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-26 06:01:42.327625 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-26 06:01:42.327636 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-03-26 06:01:42.327647 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-03-26 06:01:42.327658 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-03-26 06:01:42.327668 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-03-26 06:01:42.327679 | orchestrator | 2026-03-26 06:01:42.327690 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-26 06:01:42.327701 | orchestrator | Thursday 26 March 2026 06:01:11 +0000 (0:00:06.635) 0:58:35.399 ******** 2026-03-26 06:01:42.327712 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4, testbed-node-5 2026-03-26 06:01:42.327722 | orchestrator | 2026-03-26 06:01:42.327733 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-26 06:01:42.327744 | orchestrator | Thursday 26 March 2026 06:01:13 +0000 (0:00:01.343) 0:58:36.742 ******** 2026-03-26 06:01:42.327756 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 06:01:42.327769 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 06:01:42.327780 | orchestrator | 2026-03-26 06:01:42.327791 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-26 06:01:42.327801 | orchestrator | Thursday 26 March 2026 06:01:14 +0000 (0:00:01.652) 0:58:38.394 ******** 2026-03-26 06:01:42.327812 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 06:01:42.327823 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 06:01:42.327833 | orchestrator | 2026-03-26 06:01:42.327844 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-26 06:01:42.327876 | orchestrator | Thursday 26 March 2026 06:01:16 +0000 (0:00:02.002) 0:58:40.397 ******** 2026-03-26 06:01:42.327888 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.327899 | orchestrator | 
skipping: [testbed-node-5] 2026-03-26 06:01:42.327909 | orchestrator | 2026-03-26 06:01:42.327920 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-26 06:01:42.327931 | orchestrator | Thursday 26 March 2026 06:01:18 +0000 (0:00:01.311) 0:58:41.708 ******** 2026-03-26 06:01:42.327942 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.327953 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.327963 | orchestrator | 2026-03-26 06:01:42.327974 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-26 06:01:42.327985 | orchestrator | Thursday 26 March 2026 06:01:19 +0000 (0:00:01.260) 0:58:42.969 ******** 2026-03-26 06:01:42.327996 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328007 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328017 | orchestrator | 2026-03-26 06:01:42.328037 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-26 06:01:42.328048 | orchestrator | Thursday 26 March 2026 06:01:20 +0000 (0:00:01.598) 0:58:44.567 ******** 2026-03-26 06:01:42.328059 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328070 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328080 | orchestrator | 2026-03-26 06:01:42.328091 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-26 06:01:42.328101 | orchestrator | Thursday 26 March 2026 06:01:22 +0000 (0:00:01.212) 0:58:45.779 ******** 2026-03-26 06:01:42.328112 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328123 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328133 | orchestrator | 2026-03-26 06:01:42.328144 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-26 06:01:42.328155 | orchestrator | Thursday 26 March 2026 
06:01:23 +0000 (0:00:01.208) 0:58:46.987 ******** 2026-03-26 06:01:42.328166 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328176 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328187 | orchestrator | 2026-03-26 06:01:42.328198 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-26 06:01:42.328208 | orchestrator | Thursday 26 March 2026 06:01:24 +0000 (0:00:01.235) 0:58:48.222 ******** 2026-03-26 06:01:42.328219 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328230 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328240 | orchestrator | 2026-03-26 06:01:42.328251 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-26 06:01:42.328262 | orchestrator | Thursday 26 March 2026 06:01:25 +0000 (0:00:01.290) 0:58:49.513 ******** 2026-03-26 06:01:42.328272 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328283 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328294 | orchestrator | 2026-03-26 06:01:42.328310 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-26 06:01:42.328321 | orchestrator | Thursday 26 March 2026 06:01:27 +0000 (0:00:01.316) 0:58:50.830 ******** 2026-03-26 06:01:42.328332 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328342 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328353 | orchestrator | 2026-03-26 06:01:42.328364 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-26 06:01:42.328374 | orchestrator | Thursday 26 March 2026 06:01:28 +0000 (0:00:01.315) 0:58:52.146 ******** 2026-03-26 06:01:42.328385 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328396 | orchestrator | skipping: [testbed-node-5] 2026-03-26 
06:01:42.328406 | orchestrator | 2026-03-26 06:01:42.328417 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-26 06:01:42.328428 | orchestrator | Thursday 26 March 2026 06:01:29 +0000 (0:00:01.304) 0:58:53.450 ******** 2026-03-26 06:01:42.328438 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:01:42.328449 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:01:42.328459 | orchestrator | 2026-03-26 06:01:42.328470 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-26 06:01:42.328481 | orchestrator | Thursday 26 March 2026 06:01:31 +0000 (0:00:01.304) 0:58:54.755 ******** 2026-03-26 06:01:42.328491 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-03-26 06:01:42.328502 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-26 06:01:42.328512 | orchestrator | 2026-03-26 06:01:42.328523 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-26 06:01:42.328533 | orchestrator | Thursday 26 March 2026 06:01:35 +0000 (0:00:04.640) 0:58:59.396 ******** 2026-03-26 06:01:42.328544 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 06:01:42.328555 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 06:01:42.328573 | orchestrator | 2026-03-26 06:01:42.328608 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 06:01:42.328619 | orchestrator | Thursday 26 March 2026 06:01:37 +0000 (0:00:01.357) 0:59:00.753 ******** 2026-03-26 06:01:42.328634 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-26 06:01:42.328656 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-03-26 06:02:31.068666 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-26 06:02:31.068843 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-03-26 06:02:31.068860 | orchestrator | 2026-03-26 06:02:31.068874 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 06:02:31.068887 | orchestrator | Thursday 26 March 2026 06:01:42 +0000 (0:00:05.214) 0:59:05.968 ******** 2026-03-26 06:02:31.068898 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.068911 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:02:31.068922 | orchestrator | 2026-03-26 06:02:31.068934 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 06:02:31.068945 | orchestrator | Thursday 26 March 2026 06:01:43 +0000 
(0:00:01.242) 0:59:07.211 ******** 2026-03-26 06:02:31.068956 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.068967 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:02:31.068977 | orchestrator | 2026-03-26 06:02:31.068989 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 06:02:31.069002 | orchestrator | Thursday 26 March 2026 06:01:45 +0000 (0:00:01.606) 0:59:08.817 ******** 2026-03-26 06:02:31.069013 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.069024 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:02:31.069034 | orchestrator | 2026-03-26 06:02:31.069045 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 06:02:31.069056 | orchestrator | Thursday 26 March 2026 06:01:46 +0000 (0:00:01.271) 0:59:10.089 ******** 2026-03-26 06:02:31.069067 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.069078 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:02:31.069089 | orchestrator | 2026-03-26 06:02:31.069100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 06:02:31.069132 | orchestrator | Thursday 26 March 2026 06:01:47 +0000 (0:00:01.305) 0:59:11.395 ******** 2026-03-26 06:02:31.069148 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.069162 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:02:31.069174 | orchestrator | 2026-03-26 06:02:31.069187 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 06:02:31.069200 | orchestrator | Thursday 26 March 2026 06:01:49 +0000 (0:00:01.288) 0:59:12.684 ******** 2026-03-26 06:02:31.069212 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.069255 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.069269 | orchestrator | 2026-03-26 
06:02:31.069282 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 06:02:31.069295 | orchestrator | Thursday 26 March 2026 06:01:50 +0000 (0:00:01.391) 0:59:14.075 ******** 2026-03-26 06:02:31.069308 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 06:02:31.069321 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 06:02:31.069334 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 06:02:31.069347 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.069358 | orchestrator | 2026-03-26 06:02:31.069368 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 06:02:31.069379 | orchestrator | Thursday 26 March 2026 06:01:51 +0000 (0:00:01.420) 0:59:15.495 ******** 2026-03-26 06:02:31.069390 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 06:02:31.069401 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 06:02:31.069412 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 06:02:31.069422 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.069433 | orchestrator | 2026-03-26 06:02:31.069444 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 06:02:31.069454 | orchestrator | Thursday 26 March 2026 06:01:53 +0000 (0:00:01.426) 0:59:16.922 ******** 2026-03-26 06:02:31.069465 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 06:02:31.069476 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 06:02:31.069486 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 06:02:31.069497 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.069508 | orchestrator | 2026-03-26 06:02:31.069519 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-03-26 06:02:31.069529 | orchestrator | Thursday 26 March 2026 06:01:55 +0000 (0:00:01.794) 0:59:18.717 ******** 2026-03-26 06:02:31.069540 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.069551 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.069562 | orchestrator | 2026-03-26 06:02:31.069592 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 06:02:31.069604 | orchestrator | Thursday 26 March 2026 06:01:56 +0000 (0:00:01.338) 0:59:20.055 ******** 2026-03-26 06:02:31.069615 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-26 06:02:31.069626 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-26 06:02:31.069636 | orchestrator | 2026-03-26 06:02:31.069647 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 06:02:31.069658 | orchestrator | Thursday 26 March 2026 06:01:57 +0000 (0:00:01.506) 0:59:21.562 ******** 2026-03-26 06:02:31.069669 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.069679 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.069690 | orchestrator | 2026-03-26 06:02:31.069721 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-26 06:02:31.069733 | orchestrator | Thursday 26 March 2026 06:01:59 +0000 (0:00:01.889) 0:59:23.451 ******** 2026-03-26 06:02:31.069744 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.069755 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:02:31.069766 | orchestrator | 2026-03-26 06:02:31.069776 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-26 06:02:31.069787 | orchestrator | Thursday 26 March 2026 06:02:01 +0000 (0:00:01.232) 0:59:24.684 ******** 2026-03-26 06:02:31.069798 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4, 
testbed-node-5 2026-03-26 06:02:31.069810 | orchestrator | 2026-03-26 06:02:31.069821 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-26 06:02:31.069831 | orchestrator | Thursday 26 March 2026 06:02:02 +0000 (0:00:01.436) 0:59:26.121 ******** 2026-03-26 06:02:31.069842 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-26 06:02:31.069863 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-26 06:02:31.069874 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-26 06:02:31.069884 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-26 06:02:31.069895 | orchestrator | 2026-03-26 06:02:31.069906 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-26 06:02:31.069917 | orchestrator | Thursday 26 March 2026 06:02:04 +0000 (0:00:01.980) 0:59:28.102 ******** 2026-03-26 06:02:31.069927 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 06:02:31.069938 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 06:02:31.069949 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 06:02:31.069959 | orchestrator | 2026-03-26 06:02:31.069970 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-26 06:02:31.069981 | orchestrator | Thursday 26 March 2026 06:02:07 +0000 (0:00:03.222) 0:59:31.325 ******** 2026-03-26 06:02:31.069991 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-26 06:02:31.070002 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 06:02:31.070013 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.070092 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-03-26 06:02:31.070103 | orchestrator | skipping: [testbed-node-5] => 
(item=None)  2026-03-26 06:02:31.070113 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.070124 | orchestrator | 2026-03-26 06:02:31.070135 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-26 06:02:31.070152 | orchestrator | Thursday 26 March 2026 06:02:09 +0000 (0:00:02.087) 0:59:33.413 ******** 2026-03-26 06:02:31.070163 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.070174 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.070185 | orchestrator | 2026-03-26 06:02:31.070195 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-26 06:02:31.070206 | orchestrator | Thursday 26 March 2026 06:02:11 +0000 (0:00:01.583) 0:59:34.996 ******** 2026-03-26 06:02:31.070216 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.070227 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:02:31.070238 | orchestrator | 2026-03-26 06:02:31.070248 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-26 06:02:31.070259 | orchestrator | Thursday 26 March 2026 06:02:12 +0000 (0:00:01.227) 0:59:36.224 ******** 2026-03-26 06:02:31.070270 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4, testbed-node-5 2026-03-26 06:02:31.070281 | orchestrator | 2026-03-26 06:02:31.070291 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-26 06:02:31.070302 | orchestrator | Thursday 26 March 2026 06:02:14 +0000 (0:00:01.570) 0:59:37.794 ******** 2026-03-26 06:02:31.070313 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4, testbed-node-5 2026-03-26 06:02:31.070323 | orchestrator | 2026-03-26 06:02:31.070334 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-26 06:02:31.070345 | orchestrator | Thursday 26 March 2026 
06:02:15 +0000 (0:00:01.442) 0:59:39.237 ******** 2026-03-26 06:02:31.070356 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.070366 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.070377 | orchestrator | 2026-03-26 06:02:31.070387 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-26 06:02:31.070398 | orchestrator | Thursday 26 March 2026 06:02:17 +0000 (0:00:02.253) 0:59:41.491 ******** 2026-03-26 06:02:31.070408 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.070419 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.070429 | orchestrator | 2026-03-26 06:02:31.070440 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-26 06:02:31.070451 | orchestrator | Thursday 26 March 2026 06:02:19 +0000 (0:00:02.075) 0:59:43.567 ******** 2026-03-26 06:02:31.070461 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.070479 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.070490 | orchestrator | 2026-03-26 06:02:31.070501 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-26 06:02:31.070511 | orchestrator | Thursday 26 March 2026 06:02:22 +0000 (0:00:02.371) 0:59:45.939 ******** 2026-03-26 06:02:31.070522 | orchestrator | changed: [testbed-node-4] 2026-03-26 06:02:31.070533 | orchestrator | changed: [testbed-node-5] 2026-03-26 06:02:31.070543 | orchestrator | 2026-03-26 06:02:31.070554 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-26 06:02:31.070565 | orchestrator | Thursday 26 March 2026 06:02:25 +0000 (0:00:03.416) 0:59:49.355 ******** 2026-03-26 06:02:31.070593 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:02:31.070604 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:02:31.070615 | orchestrator | 2026-03-26 06:02:31.070626 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-03-26 06:02:31.070636 | orchestrator | Thursday 26 March 2026 06:02:27 +0000 (0:00:01.849) 0:59:51.205 ******** 2026-03-26 06:02:31.070647 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:02:31.070666 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-26 06:02:53.765904 | orchestrator | 2026-03-26 06:02:53.766100 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-26 06:02:53.766135 | orchestrator | 2026-03-26 06:02:53.766157 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 06:02:53.766177 | orchestrator | Thursday 26 March 2026 06:02:31 +0000 (0:00:03.502) 0:59:54.708 ******** 2026-03-26 06:02:53.766191 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-03-26 06:02:53.766202 | orchestrator | 2026-03-26 06:02:53.766213 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 06:02:53.766224 | orchestrator | Thursday 26 March 2026 06:02:32 +0000 (0:00:01.142) 0:59:55.850 ******** 2026-03-26 06:02:53.766235 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766247 | orchestrator | 2026-03-26 06:02:53.766258 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 06:02:53.766269 | orchestrator | Thursday 26 March 2026 06:02:33 +0000 (0:00:01.452) 0:59:57.302 ******** 2026-03-26 06:02:53.766280 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766291 | orchestrator | 2026-03-26 06:02:53.766302 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 06:02:53.766312 | orchestrator | Thursday 26 March 2026 06:02:34 +0000 (0:00:01.123) 0:59:58.426 ******** 2026-03-26 06:02:53.766323 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766334 | 
orchestrator | 2026-03-26 06:02:53.766345 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 06:02:53.766355 | orchestrator | Thursday 26 March 2026 06:02:36 +0000 (0:00:01.431) 0:59:59.857 ******** 2026-03-26 06:02:53.766366 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766377 | orchestrator | 2026-03-26 06:02:53.766387 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 06:02:53.766398 | orchestrator | Thursday 26 March 2026 06:02:37 +0000 (0:00:01.211) 1:00:01.069 ******** 2026-03-26 06:02:53.766409 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766420 | orchestrator | 2026-03-26 06:02:53.766432 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 06:02:53.766445 | orchestrator | Thursday 26 March 2026 06:02:38 +0000 (0:00:01.122) 1:00:02.192 ******** 2026-03-26 06:02:53.766457 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766469 | orchestrator | 2026-03-26 06:02:53.766483 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 06:02:53.766496 | orchestrator | Thursday 26 March 2026 06:02:39 +0000 (0:00:01.165) 1:00:03.358 ******** 2026-03-26 06:02:53.766508 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:02:53.766520 | orchestrator | 2026-03-26 06:02:53.766550 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 06:02:53.766563 | orchestrator | Thursday 26 March 2026 06:02:40 +0000 (0:00:01.145) 1:00:04.503 ******** 2026-03-26 06:02:53.766631 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766645 | orchestrator | 2026-03-26 06:02:53.766659 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 06:02:53.766672 | orchestrator | Thursday 26 March 2026 06:02:42 +0000 
(0:00:01.245) 1:00:05.749 ******** 2026-03-26 06:02:53.766685 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:02:53.766697 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:02:53.766710 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:02:53.766723 | orchestrator | 2026-03-26 06:02:53.766736 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-26 06:02:53.766749 | orchestrator | Thursday 26 March 2026 06:02:43 +0000 (0:00:01.699) 1:00:07.448 ******** 2026-03-26 06:02:53.766761 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:02:53.766772 | orchestrator | 2026-03-26 06:02:53.766783 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 06:02:53.766794 | orchestrator | Thursday 26 March 2026 06:02:45 +0000 (0:00:01.259) 1:00:08.707 ******** 2026-03-26 06:02:53.766804 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:02:53.766815 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:02:53.766826 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:02:53.766836 | orchestrator | 2026-03-26 06:02:53.766847 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 06:02:53.766858 | orchestrator | Thursday 26 March 2026 06:02:47 +0000 (0:00:02.888) 1:00:11.596 ******** 2026-03-26 06:02:53.766868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-26 06:02:53.766880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-26 06:02:53.766890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-26 
06:02:53.766901 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:02:53.766912 | orchestrator | 2026-03-26 06:02:53.766922 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 06:02:53.766933 | orchestrator | Thursday 26 March 2026 06:02:49 +0000 (0:00:01.503) 1:00:13.099 ******** 2026-03-26 06:02:53.766945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 06:02:53.766959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 06:02:53.766992 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 06:02:53.767004 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:02:53.767015 | orchestrator | 2026-03-26 06:02:53.767026 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 06:02:53.767036 | orchestrator | Thursday 26 March 2026 06:02:51 +0000 (0:00:01.923) 1:00:15.022 ******** 2026-03-26 06:02:53.767050 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 
06:02:53.767073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:02:53.767085 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:02:53.767096 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:02:53.767107 | orchestrator | 2026-03-26 06:02:53.767118 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 06:02:53.767134 | orchestrator | Thursday 26 March 2026 06:02:52 +0000 (0:00:01.212) 1:00:16.235 ******** 2026-03-26 06:02:53.767148 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 06:02:45.573480', 'end': '2026-03-26 06:02:45.621529', 'delta': '0:00:00.048049', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 06:02:53.767164 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 06:02:46.135117', 'end': '2026-03-26 06:02:46.183458', 'delta': '0:00:00.048341', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 06:02:53.767176 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 06:02:46.702167', 'end': '2026-03-26 06:02:46.750702', 'delta': '0:00:00.048535', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 06:02:53.767187 | orchestrator | 2026-03-26 06:02:53.767206 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 06:03:11.809104 | orchestrator | Thursday 26 March 2026 06:02:53 +0000 (0:00:01.174) 1:00:17.410 ******** 2026-03-26 06:03:11.809221 | orchestrator | ok: [testbed-node-3] 2026-03-26 
06:03:11.809239 | orchestrator | 2026-03-26 06:03:11.809252 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 06:03:11.809263 | orchestrator | Thursday 26 March 2026 06:02:55 +0000 (0:00:01.248) 1:00:18.659 ******** 2026-03-26 06:03:11.809296 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:11.809308 | orchestrator | 2026-03-26 06:03:11.809320 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-26 06:03:11.809330 | orchestrator | Thursday 26 March 2026 06:02:56 +0000 (0:00:01.697) 1:00:20.357 ******** 2026-03-26 06:03:11.809341 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:03:11.809352 | orchestrator | 2026-03-26 06:03:11.809363 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 06:03:11.809374 | orchestrator | Thursday 26 March 2026 06:02:57 +0000 (0:00:01.188) 1:00:21.546 ******** 2026-03-26 06:03:11.809385 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-26 06:03:11.809396 | orchestrator | 2026-03-26 06:03:11.809407 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 06:03:11.809419 | orchestrator | Thursday 26 March 2026 06:02:59 +0000 (0:00:01.989) 1:00:23.535 ******** 2026-03-26 06:03:11.809429 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:03:11.809440 | orchestrator | 2026-03-26 06:03:11.809451 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 06:03:11.809461 | orchestrator | Thursday 26 March 2026 06:03:01 +0000 (0:00:01.183) 1:00:24.718 ******** 2026-03-26 06:03:11.809472 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:11.809483 | orchestrator | 2026-03-26 06:03:11.809494 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 06:03:11.809504 | orchestrator 
| Thursday 26 March 2026 06:03:02 +0000 (0:00:01.161) 1:00:25.880 ******** 2026-03-26 06:03:11.809515 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:11.809525 | orchestrator | 2026-03-26 06:03:11.809536 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 06:03:11.809547 | orchestrator | Thursday 26 March 2026 06:03:03 +0000 (0:00:01.210) 1:00:27.090 ******** 2026-03-26 06:03:11.809558 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:11.809609 | orchestrator | 2026-03-26 06:03:11.809622 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 06:03:11.809632 | orchestrator | Thursday 26 March 2026 06:03:04 +0000 (0:00:01.113) 1:00:28.204 ******** 2026-03-26 06:03:11.809657 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:11.809671 | orchestrator | 2026-03-26 06:03:11.809683 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 06:03:11.809695 | orchestrator | Thursday 26 March 2026 06:03:05 +0000 (0:00:01.133) 1:00:29.338 ******** 2026-03-26 06:03:11.809708 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:03:11.809720 | orchestrator | 2026-03-26 06:03:11.809734 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 06:03:11.809747 | orchestrator | Thursday 26 March 2026 06:03:06 +0000 (0:00:01.187) 1:00:30.526 ******** 2026-03-26 06:03:11.809760 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:11.809772 | orchestrator | 2026-03-26 06:03:11.809785 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 06:03:11.809797 | orchestrator | Thursday 26 March 2026 06:03:07 +0000 (0:00:01.125) 1:00:31.652 ******** 2026-03-26 06:03:11.809809 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:03:11.809822 | orchestrator | 2026-03-26 06:03:11.809834 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 06:03:11.809847 | orchestrator | Thursday 26 March 2026 06:03:09 +0000 (0:00:01.210) 1:00:32.863 ******** 2026-03-26 06:03:11.809860 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:11.809872 | orchestrator | 2026-03-26 06:03:11.809884 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 06:03:11.809898 | orchestrator | Thursday 26 March 2026 06:03:10 +0000 (0:00:01.190) 1:00:34.053 ******** 2026-03-26 06:03:11.809911 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:03:11.809924 | orchestrator | 2026-03-26 06:03:11.809937 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 06:03:11.809950 | orchestrator | Thursday 26 March 2026 06:03:11 +0000 (0:00:01.166) 1:00:35.219 ******** 2026-03-26 06:03:11.809975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:11.809993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}})  2026-03-26 06:03:11.810092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 06:03:11.810111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}})  2026-03-26 06:03:11.810130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:11.810143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:11.810155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 06:03:11.810175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:11.810186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 06:03:11.810206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:13.342170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'uuids': ['aef43475-035b-4229-a7d7-1e3432ab4dcb'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS']}})  2026-03-26 06:03:13.342271 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082']}})  2026-03-26 06:03:13.342303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:13.342322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce600cf2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 06:03:13.342377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:13.342390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:03:13.342403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY', 'dm-uuid-CRYPT-LUKS2-c579629dafc941d5a76c63e3abbafb40-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 06:03:13.342416 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:03:13.342428 | orchestrator | 2026-03-26 06:03:13.342445 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 06:03:13.342472 | orchestrator | Thursday 26 March 2026 06:03:13 +0000 (0:00:01.541) 1:00:36.761 ******** 2026-03-26 06:03:13.342491 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:13.342532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082', 'dm-uuid-LVM-8hKVl461SF70Ai5uMDmNdT5BP20Vvkg8AxHs2aTbdloCZd5zRhurro2iqvFnFzRY'], 'uuids': ['c579629d-afc9-41d5-a76c-63e3abbafb40'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:13.342551 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8', 'scsi-SQEMU_QEMU_HARDDISK_2dae49df-17cb-48b5-9940-ec5e7ec792d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2dae49df', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:13.342633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2XKfyD-kvYx-XaUk-IA1D-OFMu-auWL-FeQHCw', 'scsi-0QEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80', 'scsi-SQEMU_QEMU_HARDDISK_d11e4e4a-db1d-44df-8da9-5de7e993dd80'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540418 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540539 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540631 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS', 'dm-uuid-CRYPT-LUKS2-aef43475035b4229a7d71e3432ab4dcb-y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a-osd--block--93e8c9a2--b6ff--5fe0--a79e--2922336c3e0a', 'dm-uuid-LVM-NfuOn4R5AkCZoZBaGfCwjgSejX4qlSlby5xuVgNQ7T0MWashc4xC7nHJ3VUNBCRS'], 'uuids': ['aef43475-035b-4229-a7d7-1e3432ab4dcb'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd11e4e4a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['y5xuVg-NQ7T-0MWa-shc4-xC7n-HJ3V-UNBCRS']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-dxNnp3-HdCF-97hz-w17k-bHEu-opcA-g4y34j', 'scsi-0QEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331', 'scsi-SQEMU_QEMU_HARDDISK_863ba5d2-7e2f-4393-95a6-83543745d331'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '863ba5d2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--e2623153--bc41--510f--8884--ef957bb96082-osd--block--e2623153--bc41--510f--8884--ef957bb96082']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:14.540764 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ce600cf2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1', 'scsi-SQEMU_QEMU_HARDDISK_ce600cf2-62c4-44aa-8248-5535335c6519-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:43.278337 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:43.278630 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:03:43.278668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY', 'dm-uuid-CRYPT-LUKS2-c579629dafc941d5a76c63e3abbafb40-AxHs2a-Tbdl-oCZd-5zRh-urro-2iqv-FnFzRY'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 06:03:43.278693 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.278716 | orchestrator |
2026-03-26 06:03:43.278736 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 06:03:43.278749 | orchestrator | Thursday 26 March 2026 06:03:14 +0000 (0:00:01.432) 1:00:38.193 ********
2026-03-26 06:03:43.278761 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:03:43.278774 | orchestrator |
2026-03-26 06:03:43.278785 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 06:03:43.278795 | orchestrator | Thursday 26 March 2026 06:03:16 +0000 (0:00:01.541) 1:00:39.734 ********
2026-03-26 06:03:43.278806 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:03:43.278817 | orchestrator |
2026-03-26 06:03:43.278830 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 06:03:43.278843 | orchestrator | Thursday 26 March 2026 06:03:17 +0000 (0:00:01.241) 1:00:40.976 ********
2026-03-26 06:03:43.278856 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:03:43.278868 | orchestrator |
2026-03-26 06:03:43.278881 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 06:03:43.278893 | orchestrator | Thursday 26 March 2026 06:03:18 +0000 (0:00:01.543) 1:00:42.520 ********
2026-03-26 06:03:43.278906 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.278918 | orchestrator |
2026-03-26 06:03:43.278930 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 06:03:43.278942 | orchestrator | Thursday 26 March 2026 06:03:20 +0000 (0:00:01.218) 1:00:43.739 ********
2026-03-26 06:03:43.278954 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.278966 | orchestrator |
2026-03-26 06:03:43.278979 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 06:03:43.278992 | orchestrator | Thursday 26 March 2026 06:03:21 +0000 (0:00:01.265) 1:00:45.005 ********
2026-03-26 06:03:43.279011 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279032 | orchestrator |
2026-03-26 06:03:43.279052 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 06:03:43.279071 | orchestrator | Thursday 26 March 2026 06:03:22 +0000 (0:00:02.024) 1:00:46.160 ********
2026-03-26 06:03:43.279090 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-26 06:03:43.279124 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-26 06:03:43.279143 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-26 06:03:43.279156 | orchestrator |
2026-03-26 06:03:43.279166 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 06:03:43.279177 | orchestrator | Thursday 26 March 2026 06:03:24 +0000 (0:00:01.155) 1:00:48.185 ********
2026-03-26 06:03:43.279188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-26 06:03:43.279198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-26 06:03:43.279210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-26 06:03:43.279220 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279231 | orchestrator |
2026-03-26 06:03:43.279242 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 06:03:43.279253 | orchestrator | Thursday 26 March 2026 06:03:25 +0000 (0:00:01.193) 1:00:49.379 ********
2026-03-26 06:03:43.279284 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-03-26 06:03:43.279296 | orchestrator |
2026-03-26 06:03:43.279317 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 06:03:43.279330 | orchestrator | Thursday 26 March 2026 06:03:26 +0000 (0:00:01.136) 1:00:50.516 ********
2026-03-26 06:03:43.279340 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279351 | orchestrator |
2026-03-26 06:03:43.279361 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 06:03:43.279372 | orchestrator | Thursday 26 March 2026 06:03:27 +0000 (0:00:01.122) 1:00:51.638 ********
2026-03-26 06:03:43.279382 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279393 | orchestrator |
2026-03-26 06:03:43.279403 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 06:03:43.279414 | orchestrator | Thursday 26 March 2026 06:03:29 +0000 (0:00:01.242) 1:00:52.881 ********
2026-03-26 06:03:43.279424 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279435 | orchestrator |
2026-03-26 06:03:43.279446 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 06:03:43.279456 | orchestrator | Thursday 26 March 2026 06:03:30 +0000 (0:00:01.155) 1:00:54.037 ********
2026-03-26 06:03:43.279467 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:03:43.279477 | orchestrator |
2026-03-26 06:03:43.279488 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 06:03:43.279498 | orchestrator | Thursday 26 March 2026 06:03:31 +0000 (0:00:01.210) 1:00:55.247 ********
2026-03-26 06:03:43.279509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 06:03:43.279519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 06:03:43.279530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 06:03:43.279540 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279551 | orchestrator |
2026-03-26 06:03:43.279582 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 06:03:43.279604 | orchestrator | Thursday 26 March 2026 06:03:33 +0000 (0:00:01.437) 1:00:56.685 ********
2026-03-26 06:03:43.279623 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 06:03:43.279641 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 06:03:43.279654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 06:03:43.279665 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279675 | orchestrator |
2026-03-26 06:03:43.279686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 06:03:43.279697 | orchestrator | Thursday 26 March 2026 06:03:34 +0000 (0:00:01.470) 1:00:58.155 ********
2026-03-26 06:03:43.279707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 06:03:43.279717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-26 06:03:43.279737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-26 06:03:43.279747 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:03:43.279758 | orchestrator |
2026-03-26 06:03:43.279768 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 06:03:43.279779 | orchestrator | Thursday 26 March 2026 06:03:35 +0000 (0:00:01.422) 1:00:59.578 ********
2026-03-26 06:03:43.279789 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:03:43.279800 | orchestrator |
2026-03-26 06:03:43.279810 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 06:03:43.279821 | orchestrator | Thursday 26 March 2026 06:03:37 +0000 (0:00:01.176) 1:01:00.755 ********
2026-03-26 06:03:43.279832 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-26 06:03:43.279842 | orchestrator |
2026-03-26 06:03:43.279853 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 06:03:43.279863 | orchestrator | Thursday 26 March 2026 06:03:38 +0000 (0:00:01.330) 1:01:02.085 ********
2026-03-26 06:03:43.279874 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 06:03:43.279885 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 06:03:43.279895 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 06:03:43.279906 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 06:03:43.279916 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-26 06:03:43.279927 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 06:03:43.279937 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 06:03:43.279948 | orchestrator |
2026-03-26 06:03:43.279959 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 06:03:43.279969 | orchestrator | Thursday 26 March 2026 06:03:40 +0000 (0:00:02.193) 1:01:04.278 ********
2026-03-26 06:03:43.279980 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 06:03:43.279990 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 06:03:43.280001 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 06:03:43.280011 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-26 06:03:43.280022
| orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 06:03:43.280032 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-26 06:03:43.280043 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 06:03:43.280053 | orchestrator | 2026-03-26 06:03:43.280071 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-26 06:04:35.877621 | orchestrator | Thursday 26 March 2026 06:03:43 +0000 (0:00:02.646) 1:01:06.924 ******** 2026-03-26 06:04:35.877763 | orchestrator | changed: [testbed-node-3] 2026-03-26 06:04:35.877790 | orchestrator | 2026-03-26 06:04:35.877810 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-26 06:04:35.877829 | orchestrator | Thursday 26 March 2026 06:03:45 +0000 (0:00:02.315) 1:01:09.240 ******** 2026-03-26 06:04:35.877849 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 06:04:35.877868 | orchestrator | 2026-03-26 06:04:35.877884 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-26 06:04:35.877902 | orchestrator | Thursday 26 March 2026 06:03:48 +0000 (0:00:02.873) 1:01:12.113 ******** 2026-03-26 06:04:35.877921 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 06:04:35.877969 | orchestrator | 2026-03-26 06:04:35.877990 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-26 06:04:35.878006 | orchestrator | Thursday 26 March 2026 06:03:50 +0000 (0:00:02.383) 1:01:14.497 ******** 2026-03-26 06:04:35.878098 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-03-26 06:04:35.878123 | orchestrator | 2026-03-26 06:04:35.878140 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-26 06:04:35.878157 | orchestrator | Thursday 26 March 2026 06:03:51 +0000 (0:00:01.109) 1:01:15.606 ******** 2026-03-26 06:04:35.878174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-03-26 06:04:35.878190 | orchestrator | 2026-03-26 06:04:35.878207 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-26 06:04:35.878225 | orchestrator | Thursday 26 March 2026 06:03:53 +0000 (0:00:01.146) 1:01:16.753 ******** 2026-03-26 06:04:35.878242 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.878260 | orchestrator | 2026-03-26 06:04:35.878277 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-26 06:04:35.878292 | orchestrator | Thursday 26 March 2026 06:03:54 +0000 (0:00:01.121) 1:01:17.875 ******** 2026-03-26 06:04:35.878307 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.878324 | orchestrator | 2026-03-26 06:04:35.878339 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-26 06:04:35.878357 | orchestrator | Thursday 26 March 2026 06:03:55 +0000 (0:00:01.497) 1:01:19.372 ******** 2026-03-26 06:04:35.878373 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.878390 | orchestrator | 2026-03-26 06:04:35.878408 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-26 06:04:35.878424 | orchestrator | Thursday 26 March 2026 06:03:57 +0000 (0:00:01.557) 1:01:20.930 ******** 2026-03-26 06:04:35.878440 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.878456 | orchestrator | 2026-03-26 06:04:35.878473 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-26 06:04:35.878491 | orchestrator | Thursday 26 March 2026 06:03:58 +0000 (0:00:01.501) 1:01:22.431 ******** 2026-03-26 06:04:35.878507 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.878522 | orchestrator | 2026-03-26 06:04:35.878538 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-26 06:04:35.878555 | orchestrator | Thursday 26 March 2026 06:03:59 +0000 (0:00:01.125) 1:01:23.556 ******** 2026-03-26 06:04:35.878597 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.878614 | orchestrator | 2026-03-26 06:04:35.878630 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-26 06:04:35.878647 | orchestrator | Thursday 26 March 2026 06:04:01 +0000 (0:00:01.125) 1:01:24.682 ******** 2026-03-26 06:04:35.878663 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.878678 | orchestrator | 2026-03-26 06:04:35.878694 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-26 06:04:35.878710 | orchestrator | Thursday 26 March 2026 06:04:02 +0000 (0:00:01.108) 1:01:25.790 ******** 2026-03-26 06:04:35.878728 | 
orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.878744 | orchestrator | 2026-03-26 06:04:35.878762 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 06:04:35.878778 | orchestrator | Thursday 26 March 2026 06:04:03 +0000 (0:00:01.514) 1:01:27.305 ******** 2026-03-26 06:04:35.878794 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.878809 | orchestrator | 2026-03-26 06:04:35.878826 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 06:04:35.878842 | orchestrator | Thursday 26 March 2026 06:04:05 +0000 (0:00:01.503) 1:01:28.809 ******** 2026-03-26 06:04:35.878859 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.878876 | orchestrator | 2026-03-26 06:04:35.878891 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 06:04:35.878907 | orchestrator | Thursday 26 March 2026 06:04:06 +0000 (0:00:01.183) 1:01:29.993 ******** 2026-03-26 06:04:35.878944 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.878960 | orchestrator | 2026-03-26 06:04:35.878976 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 06:04:35.878992 | orchestrator | Thursday 26 March 2026 06:04:07 +0000 (0:00:01.100) 1:01:31.093 ******** 2026-03-26 06:04:35.879008 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.879025 | orchestrator | 2026-03-26 06:04:35.879041 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 06:04:35.879056 | orchestrator | Thursday 26 March 2026 06:04:08 +0000 (0:00:01.127) 1:01:32.220 ******** 2026-03-26 06:04:35.879071 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.879087 | orchestrator | 2026-03-26 06:04:35.879104 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 06:04:35.879118 
| orchestrator | Thursday 26 March 2026 06:04:09 +0000 (0:00:01.126) 1:01:33.347 ******** 2026-03-26 06:04:35.879133 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.879149 | orchestrator | 2026-03-26 06:04:35.879196 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 06:04:35.879227 | orchestrator | Thursday 26 March 2026 06:04:10 +0000 (0:00:01.224) 1:01:34.571 ******** 2026-03-26 06:04:35.879244 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879260 | orchestrator | 2026-03-26 06:04:35.879276 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 06:04:35.879293 | orchestrator | Thursday 26 March 2026 06:04:12 +0000 (0:00:01.119) 1:01:35.691 ******** 2026-03-26 06:04:35.879309 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879326 | orchestrator | 2026-03-26 06:04:35.879341 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 06:04:35.879358 | orchestrator | Thursday 26 March 2026 06:04:13 +0000 (0:00:01.142) 1:01:36.833 ******** 2026-03-26 06:04:35.879374 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879389 | orchestrator | 2026-03-26 06:04:35.879405 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 06:04:35.879420 | orchestrator | Thursday 26 March 2026 06:04:14 +0000 (0:00:01.139) 1:01:37.973 ******** 2026-03-26 06:04:35.879435 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.879451 | orchestrator | 2026-03-26 06:04:35.879467 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 06:04:35.879482 | orchestrator | Thursday 26 March 2026 06:04:15 +0000 (0:00:01.178) 1:01:39.151 ******** 2026-03-26 06:04:35.879498 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:04:35.879514 | orchestrator | 2026-03-26 06:04:35.879529 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-26 06:04:35.879547 | orchestrator | Thursday 26 March 2026 06:04:16 +0000 (0:00:01.186) 1:01:40.338 ******** 2026-03-26 06:04:35.879622 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879642 | orchestrator | 2026-03-26 06:04:35.879660 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 06:04:35.879676 | orchestrator | Thursday 26 March 2026 06:04:17 +0000 (0:00:01.148) 1:01:41.487 ******** 2026-03-26 06:04:35.879693 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879710 | orchestrator | 2026-03-26 06:04:35.879728 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 06:04:35.879745 | orchestrator | Thursday 26 March 2026 06:04:18 +0000 (0:00:01.114) 1:01:42.602 ******** 2026-03-26 06:04:35.879761 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879778 | orchestrator | 2026-03-26 06:04:35.879794 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 06:04:35.879811 | orchestrator | Thursday 26 March 2026 06:04:20 +0000 (0:00:01.242) 1:01:43.844 ******** 2026-03-26 06:04:35.879828 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879846 | orchestrator | 2026-03-26 06:04:35.879865 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 06:04:35.879882 | orchestrator | Thursday 26 March 2026 06:04:21 +0000 (0:00:01.216) 1:01:45.061 ******** 2026-03-26 06:04:35.879918 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:04:35.879937 | orchestrator | 2026-03-26 06:04:35.879955 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 06:04:35.879972 | orchestrator | Thursday 26 March 2026 06:04:22 +0000 (0:00:01.141) 1:01:46.202 ******** 
2026-03-26 06:04:35.879989 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:04:35.880005 | orchestrator |
2026-03-26 06:04:35.880023 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 06:04:35.880039 | orchestrator | Thursday 26 March 2026 06:04:23 +0000 (0:00:01.129) 1:01:47.331 ********
2026-03-26 06:04:35.880058 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:04:35.880075 | orchestrator |
2026-03-26 06:04:35.880090 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 06:04:35.880101 | orchestrator | Thursday 26 March 2026 06:04:24 +0000 (0:00:01.112) 1:01:48.443 ********
2026-03-26 06:04:35.880111 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:04:35.880120 | orchestrator |
2026-03-26 06:04:35.880130 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 06:04:35.880139 | orchestrator | Thursday 26 March 2026 06:04:25 +0000 (0:00:01.131) 1:01:49.575 ********
2026-03-26 06:04:35.880149 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:04:35.880158 | orchestrator |
2026-03-26 06:04:35.880168 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 06:04:35.880178 | orchestrator | Thursday 26 March 2026 06:04:27 +0000 (0:00:01.103) 1:01:50.679 ********
2026-03-26 06:04:35.880186 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:04:35.880194 | orchestrator |
2026-03-26 06:04:35.880201 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 06:04:35.880209 | orchestrator | Thursday 26 March 2026 06:04:28 +0000 (0:00:01.125) 1:01:51.804 ********
2026-03-26 06:04:35.880217 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:04:35.880225 | orchestrator |
2026-03-26 06:04:35.880233 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 06:04:35.880240 | orchestrator | Thursday 26 March 2026 06:04:29 +0000 (0:00:01.126) 1:01:52.931 ********
2026-03-26 06:04:35.880248 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:04:35.880256 | orchestrator |
2026-03-26 06:04:35.880264 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 06:04:35.880271 | orchestrator | Thursday 26 March 2026 06:04:30 +0000 (0:00:01.168) 1:01:54.100 ********
2026-03-26 06:04:35.880300 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:04:35.880309 | orchestrator |
2026-03-26 06:04:35.880317 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 06:04:35.880325 | orchestrator | Thursday 26 March 2026 06:04:32 +0000 (0:00:01.900) 1:01:56.000 ********
2026-03-26 06:04:35.880333 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:04:35.880340 | orchestrator |
2026-03-26 06:04:35.880348 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 06:04:35.880356 | orchestrator | Thursday 26 March 2026 06:04:34 +0000 (0:00:02.277) 1:01:58.278 ********
2026-03-26 06:04:35.880364 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-03-26 06:04:35.880372 | orchestrator |
2026-03-26 06:04:35.880380 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 06:04:35.880403 | orchestrator | Thursday 26 March 2026 06:04:35 +0000 (0:00:01.243) 1:01:59.521 ********
2026-03-26 06:05:23.175311 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.175427 | orchestrator |
2026-03-26 06:05:23.175444 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 06:05:23.175457 | orchestrator | Thursday 26 March 2026 06:04:36 +0000 (0:00:01.132) 1:02:00.654 ********
2026-03-26 06:05:23.175468 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.175479 | orchestrator |
2026-03-26 06:05:23.175490 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 06:05:23.175528 | orchestrator | Thursday 26 March 2026 06:04:38 +0000 (0:00:01.128) 1:02:01.783 ********
2026-03-26 06:05:23.175540 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 06:05:23.175610 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 06:05:23.175623 | orchestrator |
2026-03-26 06:05:23.175634 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 06:05:23.175645 | orchestrator | Thursday 26 March 2026 06:04:39 +0000 (0:00:01.787) 1:02:03.571 ********
2026-03-26 06:05:23.175656 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:05:23.175667 | orchestrator |
2026-03-26 06:05:23.175678 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-26 06:05:23.175689 | orchestrator | Thursday 26 March 2026 06:04:41 +0000 (0:00:01.497) 1:02:05.068 ********
2026-03-26 06:05:23.175699 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.175710 | orchestrator |
2026-03-26 06:05:23.175720 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-26 06:05:23.175731 | orchestrator | Thursday 26 March 2026 06:04:42 +0000 (0:00:01.151) 1:02:06.219 ********
2026-03-26 06:05:23.175742 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.175753 | orchestrator |
2026-03-26 06:05:23.175765 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 06:05:23.175776 | orchestrator | Thursday 26 March 2026 06:04:43 +0000 (0:00:01.131) 1:02:07.351 ********
2026-03-26 06:05:23.175787 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.175797 | orchestrator |
2026-03-26 06:05:23.175808 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 06:05:23.175821 | orchestrator | Thursday 26 March 2026 06:04:44 +0000 (0:00:01.149) 1:02:08.500 ********
2026-03-26 06:05:23.175834 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-03-26 06:05:23.175847 | orchestrator |
2026-03-26 06:05:23.175859 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-26 06:05:23.175872 | orchestrator | Thursday 26 March 2026 06:04:45 +0000 (0:00:01.145) 1:02:09.645 ********
2026-03-26 06:05:23.175884 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:05:23.175896 | orchestrator |
2026-03-26 06:05:23.175908 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-26 06:05:23.175920 | orchestrator | Thursday 26 March 2026 06:04:47 +0000 (0:00:01.938) 1:02:11.584 ********
2026-03-26 06:05:23.175932 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-26 06:05:23.175944 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-26 06:05:23.175957 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-26 06:05:23.175969 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.175982 | orchestrator |
2026-03-26 06:05:23.175995 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-26 06:05:23.176006 | orchestrator | Thursday 26 March 2026 06:04:49 +0000 (0:00:01.231) 1:02:12.816 ********
2026-03-26 06:05:23.176017 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176027 | orchestrator |
2026-03-26 06:05:23.176038 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-26 06:05:23.176049 | orchestrator | Thursday 26 March 2026 06:04:50 +0000 (0:00:01.358) 1:02:14.175 ********
2026-03-26 06:05:23.176059 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176070 | orchestrator |
2026-03-26 06:05:23.176081 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-26 06:05:23.176091 | orchestrator | Thursday 26 March 2026 06:04:51 +0000 (0:00:01.202) 1:02:15.377 ********
2026-03-26 06:05:23.176102 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176113 | orchestrator |
2026-03-26 06:05:23.176123 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-26 06:05:23.176134 | orchestrator | Thursday 26 March 2026 06:04:52 +0000 (0:00:01.198) 1:02:16.576 ********
2026-03-26 06:05:23.176153 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176164 | orchestrator |
2026-03-26 06:05:23.176175 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-26 06:05:23.176186 | orchestrator | Thursday 26 March 2026 06:04:54 +0000 (0:00:01.230) 1:02:17.807 ********
2026-03-26 06:05:23.176196 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176207 | orchestrator |
2026-03-26 06:05:23.176218 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 06:05:23.176228 | orchestrator | Thursday 26 March 2026 06:04:55 +0000 (0:00:01.201) 1:02:19.008 ********
2026-03-26 06:05:23.176239 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:05:23.176249 | orchestrator |
2026-03-26 06:05:23.176260 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 06:05:23.176271 | orchestrator | Thursday 26 March 2026 06:04:57 +0000 (0:00:02.504) 1:02:21.513 ********
2026-03-26 06:05:23.176282 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:05:23.176292 | orchestrator |
2026-03-26 06:05:23.176303 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 06:05:23.176314 | orchestrator | Thursday 26 March 2026 06:04:59 +0000 (0:00:01.152) 1:02:22.665 ********
2026-03-26 06:05:23.176324 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-03-26 06:05:23.176334 | orchestrator |
2026-03-26 06:05:23.176346 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 06:05:23.176374 | orchestrator | Thursday 26 March 2026 06:05:00 +0000 (0:00:01.113) 1:02:23.779 ********
2026-03-26 06:05:23.176393 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176404 | orchestrator |
2026-03-26 06:05:23.176415 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 06:05:23.176426 | orchestrator | Thursday 26 March 2026 06:05:01 +0000 (0:00:01.182) 1:02:24.962 ********
2026-03-26 06:05:23.176436 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176447 | orchestrator |
2026-03-26 06:05:23.176458 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 06:05:23.176468 | orchestrator | Thursday 26 March 2026 06:05:02 +0000 (0:00:01.161) 1:02:26.124 ********
2026-03-26 06:05:23.176479 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176490 | orchestrator |
2026-03-26 06:05:23.176500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 06:05:23.176511 | orchestrator | Thursday 26 March 2026 06:05:03 +0000 (0:00:01.185) 1:02:27.310 ********
2026-03-26 06:05:23.176521 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176532 | orchestrator |
2026-03-26 06:05:23.176543 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 06:05:23.176577 | orchestrator | Thursday 26 March 2026 06:05:04 +0000 (0:00:01.160) 1:02:28.470 ********
2026-03-26 06:05:23.176588 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176599 | orchestrator |
2026-03-26 06:05:23.176610 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 06:05:23.176621 | orchestrator | Thursday 26 March 2026 06:05:05 +0000 (0:00:01.153) 1:02:29.624 ********
2026-03-26 06:05:23.176631 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176642 | orchestrator |
2026-03-26 06:05:23.176653 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 06:05:23.176663 | orchestrator | Thursday 26 March 2026 06:05:07 +0000 (0:00:01.262) 1:02:30.886 ********
2026-03-26 06:05:23.176674 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176685 | orchestrator |
2026-03-26 06:05:23.176704 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 06:05:23.176723 | orchestrator | Thursday 26 March 2026 06:05:08 +0000 (0:00:01.157) 1:02:32.043 ********
2026-03-26 06:05:23.176742 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:05:23.176761 | orchestrator |
2026-03-26 06:05:23.176780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 06:05:23.176811 | orchestrator | Thursday 26 March 2026 06:05:09 +0000 (0:00:01.163) 1:02:33.207 ********
2026-03-26 06:05:23.176832 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:05:23.176851 | orchestrator |
2026-03-26 06:05:23.176867 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 06:05:23.176879 | orchestrator | Thursday 26 March 2026 06:05:10 +0000 (0:00:01.177) 1:02:34.385 ********
2026-03-26 06:05:23.176889 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-03-26 06:05:23.176900 | orchestrator |
2026-03-26 06:05:23.176910 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 06:05:23.176921 | orchestrator | Thursday 26 March 2026 06:05:11 +0000 (0:00:01.115) 1:02:35.500 ********
2026-03-26 06:05:23.176932 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-03-26 06:05:23.176943 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-26 06:05:23.176953 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-26 06:05:23.176964 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-26 06:05:23.176974 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-26 06:05:23.176985 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-26 06:05:23.176995 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-26 06:05:23.177006 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-26 06:05:23.177016 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 06:05:23.177027 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 06:05:23.177038 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 06:05:23.177048 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 06:05:23.177059 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 06:05:23.177069 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 06:05:23.177080 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-03-26 06:05:23.177090 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-03-26 06:05:23.177101 | orchestrator |
2026-03-26 06:05:23.177111 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 06:05:23.177122 | orchestrator | Thursday 26 March 2026 06:05:18 +0000 (0:00:06.698) 1:02:42.199 ********
2026-03-26 06:05:23.177133 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-03-26 06:05:23.177143 | orchestrator |
2026-03-26 06:05:23.177154 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-26 06:05:23.177164 | orchestrator | Thursday 26 March 2026 06:05:19 +0000 (0:00:01.155) 1:02:43.354 ********
2026-03-26 06:05:23.177175 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 06:05:23.177202 | orchestrator |
2026-03-26 06:05:23.177213 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-26 06:05:23.177234 | orchestrator | Thursday 26 March 2026 06:05:21 +0000 (0:00:01.521) 1:02:44.875 ********
2026-03-26 06:05:23.177245 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-26 06:05:23.177256 | orchestrator |
2026-03-26 06:05:23.177266 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 06:05:23.177292 | orchestrator | Thursday 26 March 2026 06:05:23 +0000 (0:00:01.949) 1:02:46.824 ********
2026-03-26 06:06:13.920324 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920438 | orchestrator |
2026-03-26 06:06:13.920454 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 06:06:13.920468 | orchestrator | Thursday 26 March 2026 06:05:24 +0000 (0:00:01.158) 1:02:47.983 ********
2026-03-26 06:06:13.920503 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920515 | orchestrator |
2026-03-26 06:06:13.920526 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 06:06:13.920538 | orchestrator | Thursday 26 March 2026 06:05:25 +0000 (0:00:01.278) 1:02:49.261 ********
2026-03-26 06:06:13.920615 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920627 | orchestrator |
2026-03-26 06:06:13.920639 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 06:06:13.920650 | orchestrator | Thursday 26 March 2026 06:05:26 +0000 (0:00:01.123) 1:02:50.385 ********
2026-03-26 06:06:13.920673 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920684 | orchestrator |
2026-03-26 06:06:13.920694 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 06:06:13.920706 | orchestrator | Thursday 26 March 2026 06:05:27 +0000 (0:00:01.139) 1:02:51.525 ********
2026-03-26 06:06:13.920718 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920727 | orchestrator |
2026-03-26 06:06:13.920737 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 06:06:13.920749 | orchestrator | Thursday 26 March 2026 06:05:28 +0000 (0:00:01.131) 1:02:52.657 ********
2026-03-26 06:06:13.920759 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920769 | orchestrator |
2026-03-26 06:06:13.920780 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 06:06:13.920792 | orchestrator | Thursday 26 March 2026 06:05:30 +0000 (0:00:01.101) 1:02:53.758 ********
2026-03-26 06:06:13.920804 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920815 | orchestrator |
2026-03-26 06:06:13.920826 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 06:06:13.920838 | orchestrator | Thursday 26 March 2026 06:05:31 +0000 (0:00:01.206) 1:02:54.965 ********
2026-03-26 06:06:13.920849 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920860 | orchestrator |
2026-03-26 06:06:13.920872 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 06:06:13.920884 | orchestrator | Thursday 26 March 2026 06:05:32 +0000 (0:00:01.150) 1:02:56.115 ********
2026-03-26 06:06:13.920897 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920908 | orchestrator |
2026-03-26 06:06:13.920920 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 06:06:13.920932 | orchestrator | Thursday 26 March 2026 06:05:33 +0000 (0:00:01.120) 1:02:57.236 ********
2026-03-26 06:06:13.920944 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.920956 | orchestrator |
2026-03-26 06:06:13.920968 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 06:06:13.920980 | orchestrator | Thursday 26 March 2026 06:05:34 +0000 (0:00:01.133) 1:02:58.369 ********
2026-03-26 06:06:13.920992 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:06:13.921003 | orchestrator |
2026-03-26 06:06:13.921015 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 06:06:13.921028 | orchestrator | Thursday 26 March 2026 06:05:35 +0000 (0:00:01.172) 1:02:59.542 ********
2026-03-26 06:06:13.921040 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-03-26 06:06:13.921052 | orchestrator |
2026-03-26 06:06:13.921064 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 06:06:13.921076 | orchestrator | Thursday 26 March 2026 06:05:40 +0000 (0:00:04.512) 1:03:04.055 ********
2026-03-26 06:06:13.921088 | orchestrator |
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 06:06:13.921101 | orchestrator | 2026-03-26 06:06:13.921112 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 06:06:13.921124 | orchestrator | Thursday 26 March 2026 06:05:41 +0000 (0:00:01.148) 1:03:05.204 ******** 2026-03-26 06:06:13.921139 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-03-26 06:06:13.921167 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-03-26 06:06:13.921179 | orchestrator | 2026-03-26 06:06:13.921190 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 06:06:13.921202 | orchestrator | Thursday 26 March 2026 06:05:46 +0000 (0:00:05.259) 1:03:10.463 ******** 2026-03-26 06:06:13.921213 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.921224 | orchestrator | 2026-03-26 06:06:13.921234 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 06:06:13.921253 | orchestrator | Thursday 26 March 2026 06:05:48 +0000 (0:00:01.309) 1:03:11.773 ******** 2026-03-26 06:06:13.921263 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.921274 | orchestrator | 2026-03-26 06:06:13.921284 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 06:06:13.921326 | orchestrator | Thursday 26 March 2026 06:05:49 +0000 (0:00:01.196) 1:03:12.970 ******** 2026-03-26 06:06:13.921338 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.921349 | orchestrator | 2026-03-26 06:06:13.921359 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 06:06:13.921369 | orchestrator | Thursday 26 March 2026 06:05:50 +0000 (0:00:01.248) 1:03:14.218 ******** 2026-03-26 06:06:13.921380 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.921390 | orchestrator | 2026-03-26 06:06:13.921400 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 06:06:13.921410 | orchestrator | Thursday 26 March 2026 06:05:51 +0000 (0:00:01.127) 1:03:15.346 ******** 2026-03-26 06:06:13.921421 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.921431 | orchestrator | 2026-03-26 06:06:13.921442 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 06:06:13.921452 | orchestrator | Thursday 26 March 2026 06:05:52 +0000 (0:00:01.210) 1:03:16.557 ******** 2026-03-26 06:06:13.921462 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:06:13.921474 | orchestrator | 2026-03-26 06:06:13.921484 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 06:06:13.921494 | orchestrator | Thursday 26 March 2026 06:05:54 +0000 (0:00:01.257) 1:03:17.814 ******** 2026-03-26 06:06:13.921505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 06:06:13.921515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 06:06:13.921525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 06:06:13.921535 | orchestrator | skipping: 
[testbed-node-3] 2026-03-26 06:06:13.921560 | orchestrator | 2026-03-26 06:06:13.921571 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 06:06:13.921582 | orchestrator | Thursday 26 March 2026 06:05:55 +0000 (0:00:01.473) 1:03:19.288 ******** 2026-03-26 06:06:13.921592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 06:06:13.921602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 06:06:13.921613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 06:06:13.921623 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.921633 | orchestrator | 2026-03-26 06:06:13.921644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 06:06:13.921654 | orchestrator | Thursday 26 March 2026 06:05:57 +0000 (0:00:01.405) 1:03:20.694 ******** 2026-03-26 06:06:13.921664 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-26 06:06:13.921680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-26 06:06:13.921691 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-26 06:06:13.921701 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.921711 | orchestrator | 2026-03-26 06:06:13.921722 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 06:06:13.921732 | orchestrator | Thursday 26 March 2026 06:05:58 +0000 (0:00:01.410) 1:03:22.104 ******** 2026-03-26 06:06:13.921742 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:06:13.921753 | orchestrator | 2026-03-26 06:06:13.921763 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 06:06:13.921774 | orchestrator | Thursday 26 March 2026 06:05:59 +0000 (0:00:01.170) 1:03:23.275 ******** 2026-03-26 06:06:13.921784 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-03-26 06:06:13.921794 | orchestrator | 2026-03-26 06:06:13.921805 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 06:06:13.921815 | orchestrator | Thursday 26 March 2026 06:06:00 +0000 (0:00:01.369) 1:03:24.645 ******** 2026-03-26 06:06:13.921825 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:06:13.921835 | orchestrator | 2026-03-26 06:06:13.921845 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-26 06:06:13.921855 | orchestrator | Thursday 26 March 2026 06:06:02 +0000 (0:00:01.946) 1:03:26.591 ******** 2026-03-26 06:06:13.921865 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-03-26 06:06:13.921876 | orchestrator | 2026-03-26 06:06:13.921887 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-26 06:06:13.921898 | orchestrator | Thursday 26 March 2026 06:06:04 +0000 (0:00:01.472) 1:03:28.064 ******** 2026-03-26 06:06:13.921908 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 06:06:13.921919 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 06:06:13.921929 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 06:06:13.921939 | orchestrator | 2026-03-26 06:06:13.921950 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-26 06:06:13.921960 | orchestrator | Thursday 26 March 2026 06:06:07 +0000 (0:00:03.236) 1:03:31.300 ******** 2026-03-26 06:06:13.921970 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-26 06:06:13.921981 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-26 06:06:13.921991 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:06:13.922001 | orchestrator | 2026-03-26 06:06:13.922012 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-26 06:06:13.922082 | orchestrator | Thursday 26 March 2026 06:06:09 +0000 (0:00:02.013) 1:03:33.314 ******** 2026-03-26 06:06:13.922092 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:06:13.922103 | orchestrator | 2026-03-26 06:06:13.922114 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-26 06:06:13.922124 | orchestrator | Thursday 26 March 2026 06:06:10 +0000 (0:00:01.169) 1:03:34.484 ******** 2026-03-26 06:06:13.922134 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-03-26 06:06:13.922146 | orchestrator | 2026-03-26 06:06:13.922156 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-26 06:06:13.922166 | orchestrator | Thursday 26 March 2026 06:06:12 +0000 (0:00:01.484) 1:03:35.968 ******** 2026-03-26 06:06:13.922190 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 06:07:28.486660 | orchestrator | 2026-03-26 06:07:28.486774 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-26 06:07:28.486790 | orchestrator | Thursday 26 March 2026 06:06:13 +0000 (0:00:01.599) 1:03:37.568 ******** 2026-03-26 06:07:28.486802 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 06:07:28.486842 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-26 06:07:28.486855 | orchestrator | 2026-03-26 06:07:28.486866 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-26 06:07:28.486876 | orchestrator | Thursday 26 March 2026 06:06:19 +0000 (0:00:05.222) 1:03:42.790 ******** 
2026-03-26 06:07:28.486887 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 06:07:28.486899 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 06:07:28.486912 | orchestrator | 2026-03-26 06:07:28.486929 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-26 06:07:28.486948 | orchestrator | Thursday 26 March 2026 06:06:22 +0000 (0:00:03.171) 1:03:45.961 ******** 2026-03-26 06:07:28.486966 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-03-26 06:07:28.486985 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:07:28.487004 | orchestrator | 2026-03-26 06:07:28.487022 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-26 06:07:28.487039 | orchestrator | Thursday 26 March 2026 06:06:24 +0000 (0:00:02.040) 1:03:48.002 ******** 2026-03-26 06:07:28.487058 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-26 06:07:28.487077 | orchestrator | 2026-03-26 06:07:28.487094 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-26 06:07:28.487111 | orchestrator | Thursday 26 March 2026 06:06:25 +0000 (0:00:01.630) 1:03:49.633 ******** 2026-03-26 06:07:28.487122 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487184 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:07:28.487196 | orchestrator | 2026-03-26 06:07:28.487210 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-26 06:07:28.487223 | orchestrator | Thursday 26 March 2026 06:06:27 +0000 (0:00:01.612) 1:03:51.245 ******** 2026-03-26 06:07:28.487235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:07:28.487288 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:07:28.487299 | orchestrator | 2026-03-26 06:07:28.487310 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-26 06:07:28.487321 | orchestrator | Thursday 26 March 2026 06:06:29 +0000 (0:00:01.666) 1:03:52.912 ******** 2026-03-26 06:07:28.487332 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:07:28.487354 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:07:28.487366 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:07:28.487376 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:07:28.487389 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:07:28.487400 | orchestrator | 2026-03-26 06:07:28.487426 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-26 06:07:28.487456 | orchestrator | Thursday 26 March 2026 06:07:00 +0000 (0:00:31.259) 1:04:24.172 ******** 2026-03-26 06:07:28.487467 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:07:28.487478 | orchestrator | 2026-03-26 06:07:28.487489 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-26 06:07:28.487500 | orchestrator | Thursday 26 March 2026 06:07:01 +0000 (0:00:01.146) 1:04:25.319 ******** 2026-03-26 06:07:28.487511 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:07:28.487521 | orchestrator | 2026-03-26 06:07:28.487608 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-26 06:07:28.487624 | orchestrator | Thursday 26 March 2026 06:07:02 +0000 (0:00:01.146) 1:04:26.465 ******** 2026-03-26 06:07:28.487634 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-03-26 06:07:28.487645 | orchestrator | 2026-03-26 06:07:28.487656 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-26 06:07:28.487666 | orchestrator | Thursday 26 March 2026 06:07:04 +0000 (0:00:01.491) 1:04:27.957 ******** 2026-03-26 06:07:28.487677 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-03-26 06:07:28.487688 | orchestrator | 2026-03-26 06:07:28.487699 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-26 06:07:28.487709 | orchestrator | Thursday 26 March 2026 06:07:05 +0000 (0:00:01.521) 1:04:29.478 ******** 2026-03-26 06:07:28.487720 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:07:28.487731 | orchestrator | 2026-03-26 06:07:28.487742 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-26 06:07:28.487752 | orchestrator | Thursday 26 March 2026 06:07:07 +0000 (0:00:02.099) 1:04:31.577 ******** 2026-03-26 06:07:28.487763 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:07:28.487773 | orchestrator | 2026-03-26 06:07:28.487784 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-26 06:07:28.487795 | orchestrator | Thursday 26 March 2026 06:07:09 +0000 (0:00:02.027) 1:04:33.605 ******** 2026-03-26 06:07:28.487806 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:07:28.487817 | orchestrator | 2026-03-26 06:07:28.487827 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-26 06:07:28.487838 | orchestrator | Thursday 26 March 2026 06:07:12 +0000 (0:00:02.309) 1:04:35.914 ******** 2026-03-26 06:07:28.487849 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-26 06:07:28.487860 | orchestrator | 2026-03-26 06:07:28.487870 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-26 06:07:28.487881 | 
orchestrator | 2026-03-26 06:07:28.487892 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 06:07:28.487902 | orchestrator | Thursday 26 March 2026 06:07:15 +0000 (0:00:03.155) 1:04:39.070 ******** 2026-03-26 06:07:28.487913 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-03-26 06:07:28.487924 | orchestrator | 2026-03-26 06:07:28.487943 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 06:07:28.487954 | orchestrator | Thursday 26 March 2026 06:07:16 +0000 (0:00:01.119) 1:04:40.189 ******** 2026-03-26 06:07:28.487965 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:28.487975 | orchestrator | 2026-03-26 06:07:28.487986 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 06:07:28.487997 | orchestrator | Thursday 26 March 2026 06:07:17 +0000 (0:00:01.429) 1:04:41.619 ******** 2026-03-26 06:07:28.488007 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:28.488018 | orchestrator | 2026-03-26 06:07:28.488029 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 06:07:28.488039 | orchestrator | Thursday 26 March 2026 06:07:19 +0000 (0:00:01.150) 1:04:42.770 ******** 2026-03-26 06:07:28.488050 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:28.488060 | orchestrator | 2026-03-26 06:07:28.488071 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 06:07:28.488082 | orchestrator | Thursday 26 March 2026 06:07:20 +0000 (0:00:01.465) 1:04:44.236 ******** 2026-03-26 06:07:28.488092 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:28.488103 | orchestrator | 2026-03-26 06:07:28.488114 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 06:07:28.488124 | orchestrator | Thursday 
26 March 2026 06:07:21 +0000 (0:00:01.143) 1:04:45.379 ******** 2026-03-26 06:07:28.488135 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:28.488146 | orchestrator | 2026-03-26 06:07:28.488157 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 06:07:28.488167 | orchestrator | Thursday 26 March 2026 06:07:22 +0000 (0:00:01.125) 1:04:46.505 ******** 2026-03-26 06:07:28.488178 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:28.488188 | orchestrator | 2026-03-26 06:07:28.488199 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 06:07:28.488210 | orchestrator | Thursday 26 March 2026 06:07:24 +0000 (0:00:01.207) 1:04:47.713 ******** 2026-03-26 06:07:28.488220 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:28.488231 | orchestrator | 2026-03-26 06:07:28.488242 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 06:07:28.488252 | orchestrator | Thursday 26 March 2026 06:07:25 +0000 (0:00:01.143) 1:04:48.857 ******** 2026-03-26 06:07:28.488263 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:28.488274 | orchestrator | 2026-03-26 06:07:28.488285 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 06:07:28.488295 | orchestrator | Thursday 26 March 2026 06:07:26 +0000 (0:00:01.132) 1:04:49.990 ******** 2026-03-26 06:07:28.488306 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:07:28.488317 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:07:28.488327 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:07:28.488338 | orchestrator | 2026-03-26 06:07:28.488354 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-26 06:07:28.488373 | orchestrator | Thursday 26 March 2026 06:07:28 +0000 (0:00:02.144) 1:04:52.134 ******** 2026-03-26 06:07:53.802199 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:53.802314 | orchestrator | 2026-03-26 06:07:53.802330 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 06:07:53.802343 | orchestrator | Thursday 26 March 2026 06:07:29 +0000 (0:00:01.245) 1:04:53.379 ******** 2026-03-26 06:07:53.802354 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:07:53.802366 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:07:53.802377 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:07:53.802387 | orchestrator | 2026-03-26 06:07:53.802398 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 06:07:53.802436 | orchestrator | Thursday 26 March 2026 06:07:32 +0000 (0:00:02.918) 1:04:56.298 ******** 2026-03-26 06:07:53.802448 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-26 06:07:53.802459 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-26 06:07:53.802469 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-26 06:07:53.802480 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.802491 | orchestrator | 2026-03-26 06:07:53.802501 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 06:07:53.802512 | orchestrator | Thursday 26 March 2026 06:07:34 +0000 (0:00:01.492) 1:04:57.791 ******** 2026-03-26 06:07:53.802524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 06:07:53.802588 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 06:07:53.802601 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 06:07:53.802612 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.802623 | orchestrator | 2026-03-26 06:07:53.802634 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 06:07:53.802645 | orchestrator | Thursday 26 March 2026 06:07:35 +0000 (0:00:01.734) 1:04:59.526 ******** 2026-03-26 06:07:53.802659 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:53.802673 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:53.802688 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:53.802702 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.802715 | orchestrator | 2026-03-26 06:07:53.802728 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 06:07:53.802740 | orchestrator | Thursday 26 March 2026 06:07:37 +0000 (0:00:01.227) 1:05:00.753 ******** 2026-03-26 06:07:53.802787 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 06:07:30.238507', 'end': '2026-03-26 06:07:30.291933', 'delta': '0:00:00.053426', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 06:07:53.802815 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 06:07:30.798086', 'end': '2026-03-26 06:07:30.854639', 'delta': '0:00:00.056553', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-26 06:07:53.802830 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 06:07:31.399026', 'end': '2026-03-26 06:07:31.431938', 'delta': '0:00:00.032912', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-26 06:07:53.802842 | orchestrator | 2026-03-26 06:07:53.802853 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-26 06:07:53.802865 | orchestrator | Thursday 26 March 2026 06:07:38 +0000 (0:00:01.202) 1:05:01.955 ******** 2026-03-26 06:07:53.802876 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:53.802887 | orchestrator | 2026-03-26 06:07:53.802898 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-26 06:07:53.802909 | orchestrator | Thursday 26 March 2026 06:07:39 +0000 (0:00:01.272) 1:05:03.228 ******** 2026-03-26 06:07:53.802920 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.802931 | orchestrator | 2026-03-26 06:07:53.802942 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-03-26 06:07:53.802953 | orchestrator | Thursday 26 March 2026 06:07:40 +0000 (0:00:01.269) 1:05:04.497 ******** 2026-03-26 06:07:53.802964 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:53.802974 | orchestrator | 2026-03-26 06:07:53.802985 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-26 06:07:53.802996 | orchestrator | Thursday 26 March 2026 06:07:41 +0000 (0:00:01.151) 1:05:05.648 ******** 2026-03-26 06:07:53.803007 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-03-26 06:07:53.803018 | orchestrator | 2026-03-26 06:07:53.803028 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 06:07:53.803039 | orchestrator | Thursday 26 March 2026 06:07:43 +0000 (0:00:01.996) 1:05:07.645 ******** 2026-03-26 06:07:53.803050 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:53.803061 | orchestrator | 2026-03-26 06:07:53.803071 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-26 06:07:53.803082 | orchestrator | Thursday 26 March 2026 06:07:45 +0000 (0:00:01.162) 1:05:08.808 ******** 2026-03-26 06:07:53.803093 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.803104 | orchestrator | 2026-03-26 06:07:53.803115 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-26 06:07:53.803126 | orchestrator | Thursday 26 March 2026 06:07:46 +0000 (0:00:01.121) 1:05:09.929 ******** 2026-03-26 06:07:53.803136 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.803147 | orchestrator | 2026-03-26 06:07:53.803158 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-26 06:07:53.803176 | orchestrator | Thursday 26 March 2026 06:07:48 +0000 (0:00:01.740) 1:05:11.669 ******** 2026-03-26 06:07:53.803187 | orchestrator | 
skipping: [testbed-node-4] 2026-03-26 06:07:53.803197 | orchestrator | 2026-03-26 06:07:53.803208 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-26 06:07:53.803219 | orchestrator | Thursday 26 March 2026 06:07:49 +0000 (0:00:01.177) 1:05:12.847 ******** 2026-03-26 06:07:53.803229 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.803240 | orchestrator | 2026-03-26 06:07:53.803250 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-26 06:07:53.803261 | orchestrator | Thursday 26 March 2026 06:07:50 +0000 (0:00:01.128) 1:05:13.975 ******** 2026-03-26 06:07:53.803271 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:53.803282 | orchestrator | 2026-03-26 06:07:53.803293 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-26 06:07:53.803303 | orchestrator | Thursday 26 March 2026 06:07:51 +0000 (0:00:01.151) 1:05:15.127 ******** 2026-03-26 06:07:53.803314 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:53.803324 | orchestrator | 2026-03-26 06:07:53.803335 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-26 06:07:53.803346 | orchestrator | Thursday 26 March 2026 06:07:52 +0000 (0:00:01.147) 1:05:16.274 ******** 2026-03-26 06:07:53.803361 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:53.803372 | orchestrator | 2026-03-26 06:07:53.803383 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-26 06:07:53.803401 | orchestrator | Thursday 26 March 2026 06:07:53 +0000 (0:00:01.172) 1:05:17.447 ******** 2026-03-26 06:07:56.400301 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:56.400397 | orchestrator | 2026-03-26 06:07:56.400410 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-26 06:07:56.400421 
| orchestrator | Thursday 26 March 2026 06:07:55 +0000 (0:00:01.213) 1:05:18.661 ******** 2026-03-26 06:07:56.400431 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:07:56.400440 | orchestrator | 2026-03-26 06:07:56.400449 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 06:07:56.400458 | orchestrator | Thursday 26 March 2026 06:07:56 +0000 (0:00:01.163) 1:05:19.824 ******** 2026-03-26 06:07:56.400469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:56.400483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'uuids': ['1d39f6c5-1f6c-4630-99cd-a410ca5e45d8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt']}})  2026-03-26 06:07:56.400496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7e352b46', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 06:07:56.400571 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e']}})  2026-03-26 06:07:56.400585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:56.400595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:56.400634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 06:07:56.400645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:56.400654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG', 'dm-uuid-CRYPT-LUKS2-741ece0a80b8415aa2e2dcc695db5f53-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 06:07:56.400664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:56.400673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'uuids': ['741ece0a-80b8-415a-a2e2-dcc695db5f53'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG']}})  2026-03-26 06:07:56.400689 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543']}})  2026-03-26 06:07:56.400698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:56.400723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48d73a84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 06:07:57.730944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:57.731064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:07:57.731090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt', 'dm-uuid-CRYPT-LUKS2-1d39f6c51f6c463099cda410ca5e45d8-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 06:07:57.731105 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:07:57.731118 | orchestrator | 2026-03-26 06:07:57.731128 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 06:07:57.731139 | orchestrator | Thursday 26 March 2026 06:07:57 +0000 (0:00:01.331) 1:05:21.155 ******** 2026-03-26 06:07:57.731168 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731181 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543', 'dm-uuid-LVM-O1aEkSX5V2TgXKGnqX2peNd9dQhi04NAZJyEqlgfRLjtJKN8JwRgDI1ZPO4R3wgt'], 'uuids': ['1d39f6c5-1f6c-4630-99cd-a410ca5e45d8'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731200 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44', 'scsi-SQEMU_QEMU_HARDDISK_7e352b46-e023-45cf-8a88-51cc46240a44'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7e352b46', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-eoBjP8-dDdJ-3FQm-pH7P-5B72-c1L3-mABWfX', 'scsi-0QEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab', 'scsi-SQEMU_QEMU_HARDDISK_7db5f133-fe7b-42a4-ad57-b076dc1856ab'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731292 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731336 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:07:57.731382 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG', 'dm-uuid-CRYPT-LUKS2-741ece0a80b8415aa2e2dcc695db5f53-2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095665 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--a652979e--9f40--503a--bbc8--6de5e605991e-osd--block--a652979e--9f40--503a--bbc8--6de5e605991e', 'dm-uuid-LVM-86WEu6duX2Pejl3asW6viK3fsh4aqvqg2h2U7SLeR6PGwru1xY81U9rrCs8siESG'], 'uuids': ['741ece0a-80b8-415a-a2e2-dcc695db5f53'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '7db5f133', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['2h2U7S-LeR6-PGwr-u1xY-81U9-rrCs-8siESG']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Oy69b4-OcVV-F2KD-vi5G-C8ns-n3Cu-1PhYTB', 'scsi-0QEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263', 'scsi-SQEMU_QEMU_HARDDISK_a52ec37c-b4ea-4f83-9b16-3c0f6ce85263'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a52ec37c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--b5eee7c3--8883--5bbe--be5a--75726e822543-osd--block--b5eee7c3--8883--5bbe--be5a--75726e822543']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48d73a84', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1', 'scsi-SQEMU_QEMU_HARDDISK_48d73a84-835d-480a-92c3-3edf7ed142ea-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095906 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095924 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:08:03.095937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt', 'dm-uuid-CRYPT-LUKS2-1d39f6c51f6c463099cda410ca5e45d8-ZJyEql-gfRL-jtJK-N8Jw-RgDI-1ZPO-4R3wgt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-03-26 06:08:03.095959 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:03.095973 | orchestrator |
2026-03-26 06:08:03.095985 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-26 06:08:03.095998 | orchestrator | Thursday 26 March 2026 06:07:58 +0000 (0:00:01.444) 1:05:22.600 ********
2026-03-26 06:08:03.096009 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:03.096020 | orchestrator |
2026-03-26 06:08:03.096031 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-26 06:08:03.096042 | orchestrator | Thursday 26 March 2026 06:08:00 +0000 (0:00:01.517) 1:05:24.119 ********
2026-03-26 06:08:03.096053 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:03.096064 | orchestrator |
2026-03-26 06:08:03.096075 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 06:08:03.096086 | orchestrator | Thursday 26 March 2026 06:08:01 +0000 (0:00:01.150) 1:05:25.270 ********
2026-03-26 06:08:03.096096 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:03.096107 | orchestrator |
2026-03-26 06:08:03.096118 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 06:08:03.096136 | orchestrator | Thursday 26 March 2026 06:08:03 +0000 (0:00:01.472) 1:05:26.743 ********
2026-03-26 06:08:45.780880 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781004 | orchestrator |
2026-03-26 06:08:45.781021 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-26 06:08:45.781034 | orchestrator | Thursday 26 March 2026 06:08:04 +0000 (0:00:01.310) 1:05:28.054 ********
2026-03-26 06:08:45.781045 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781056 | orchestrator |
2026-03-26 06:08:45.781067 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-26 06:08:45.781078 | orchestrator | Thursday 26 March 2026 06:08:05 +0000 (0:00:01.315) 1:05:29.369 ********
2026-03-26 06:08:45.781089 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781100 | orchestrator |
2026-03-26 06:08:45.781111 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-26 06:08:45.781122 | orchestrator | Thursday 26 March 2026 06:08:06 +0000 (0:00:01.125) 1:05:30.495 ********
2026-03-26 06:08:45.781133 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 06:08:45.781145 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 06:08:45.781156 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 06:08:45.781166 | orchestrator |
2026-03-26 06:08:45.781177 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-26 06:08:45.781188 | orchestrator | Thursday 26 March 2026 06:08:08 +0000 (0:00:01.668) 1:05:32.164 ********
2026-03-26 06:08:45.781199 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-26 06:08:45.781210 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-26 06:08:45.781221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-26 06:08:45.781232 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781243 | orchestrator |
2026-03-26 06:08:45.781254 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-26 06:08:45.781265 | orchestrator | Thursday 26 March 2026 06:08:10 +0000 (0:00:01.664) 1:05:33.829 ********
2026-03-26 06:08:45.781276 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-03-26 06:08:45.781287 | orchestrator |
2026-03-26 06:08:45.781299 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 06:08:45.781312 | orchestrator | Thursday 26 March 2026 06:08:11 +0000 (0:00:01.166) 1:05:34.996 ********
2026-03-26 06:08:45.781322 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781333 | orchestrator |
2026-03-26 06:08:45.781344 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 06:08:45.781355 | orchestrator | Thursday 26 March 2026 06:08:12 +0000 (0:00:01.143) 1:05:36.139 ********
2026-03-26 06:08:45.781391 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781403 | orchestrator |
2026-03-26 06:08:45.781414 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 06:08:45.781426 | orchestrator | Thursday 26 March 2026 06:08:13 +0000 (0:00:01.134) 1:05:37.274 ********
2026-03-26 06:08:45.781454 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781468 | orchestrator |
2026-03-26 06:08:45.781482 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 06:08:45.781494 | orchestrator | Thursday 26 March 2026 06:08:14 +0000 (0:00:01.136) 1:05:38.410 ********
2026-03-26 06:08:45.781506 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:45.781566 | orchestrator |
2026-03-26 06:08:45.781582 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 06:08:45.781594 | orchestrator | Thursday 26 March 2026 06:08:16 +0000 (0:00:01.297) 1:05:39.708 ********
2026-03-26 06:08:45.781607 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-26 06:08:45.781620 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 06:08:45.781632 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-26 06:08:45.781645 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781658 | orchestrator |
2026-03-26 06:08:45.781670 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 06:08:45.781682 | orchestrator | Thursday 26 March 2026 06:08:17 +0000 (0:00:01.393) 1:05:41.101 ********
2026-03-26 06:08:45.781694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-26 06:08:45.781707 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 06:08:45.781719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-26 06:08:45.781732 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781743 | orchestrator |
2026-03-26 06:08:45.781756 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 06:08:45.781767 | orchestrator | Thursday 26 March 2026 06:08:18 +0000 (0:00:01.517) 1:05:42.619 ********
2026-03-26 06:08:45.781780 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-26 06:08:45.781792 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 06:08:45.781804 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-26 06:08:45.781816 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.781827 | orchestrator |
2026-03-26 06:08:45.781838 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 06:08:45.781848 | orchestrator | Thursday 26 March 2026 06:08:20 +0000 (0:00:01.362) 1:05:43.982 ********
2026-03-26 06:08:45.781859 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:45.781870 | orchestrator |
2026-03-26 06:08:45.781880 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 06:08:45.781891 | orchestrator | Thursday 26 March 2026 06:08:21 +0000 (0:00:00.921) 1:05:44.903 ********
2026-03-26 06:08:45.781902 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-26 06:08:45.781948 | orchestrator |
2026-03-26 06:08:45.781982 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-26 06:08:45.781994 | orchestrator | Thursday 26 March 2026 06:08:22 +0000 (0:00:01.229) 1:05:46.133 ********
2026-03-26 06:08:45.782082 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 06:08:45.782096 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 06:08:45.782107 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 06:08:45.782118 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 06:08:45.782129 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 06:08:45.782139 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 06:08:45.782161 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 06:08:45.782172 | orchestrator |
2026-03-26 06:08:45.782182 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-26 06:08:45.782193 | orchestrator | Thursday 26 March 2026 06:08:24 +0000 (0:00:01.797) 1:05:47.930 ********
2026-03-26 06:08:45.782204 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-26 06:08:45.782215 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-26 06:08:45.782225 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-26 06:08:45.782236 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-03-26 06:08:45.782247 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-03-26 06:08:45.782257 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-26 06:08:45.782268 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-26 06:08:45.782279 | orchestrator |
2026-03-26 06:08:45.782289 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-03-26 06:08:45.782300 | orchestrator | Thursday 26 March 2026 06:08:26 +0000 (0:00:02.534) 1:05:50.464 ********
2026-03-26 06:08:45.782311 | orchestrator | changed: [testbed-node-4]
2026-03-26 06:08:45.782321 | orchestrator |
2026-03-26 06:08:45.782332 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-03-26 06:08:45.782342 | orchestrator | Thursday 26 March 2026 06:08:28 +0000 (0:00:01.943) 1:05:52.408 ********
2026-03-26 06:08:45.782353 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-26 06:08:45.782364 | orchestrator |
2026-03-26 06:08:45.782375 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-03-26 06:08:45.782385 | orchestrator | Thursday 26 March 2026 06:08:32 +0000 (0:00:03.633) 1:05:56.041 ********
2026-03-26 06:08:45.782402 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-26 06:08:45.782413 | orchestrator |
2026-03-26 06:08:45.782424 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 06:08:45.782435 | orchestrator | Thursday 26 March 2026 06:08:34 +0000 (0:00:01.931) 1:05:57.973 ********
2026-03-26 06:08:45.782446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-03-26 06:08:45.782456 | orchestrator |
2026-03-26 06:08:45.782467 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 06:08:45.782478 | orchestrator | Thursday 26 March 2026 06:08:35 +0000 (0:00:01.119) 1:05:59.093 ********
2026-03-26 06:08:45.782489 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-03-26 06:08:45.782499 | orchestrator |
2026-03-26 06:08:45.782510 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 06:08:45.782547 | orchestrator | Thursday 26 March 2026 06:08:36 +0000 (0:00:01.099) 1:06:00.192 ********
2026-03-26 06:08:45.782559 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.782570 | orchestrator |
2026-03-26 06:08:45.782581 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 06:08:45.782591 | orchestrator | Thursday 26 March 2026 06:08:37 +0000 (0:00:01.128) 1:06:01.321 ********
2026-03-26 06:08:45.782602 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:45.782613 | orchestrator |
2026-03-26 06:08:45.782624 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 06:08:45.782634 | orchestrator | Thursday 26 March 2026 06:08:39 +0000 (0:00:01.588) 1:06:02.909 ********
2026-03-26 06:08:45.782645 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:45.782656 | orchestrator |
2026-03-26 06:08:45.782667 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 06:08:45.782685 | orchestrator | Thursday 26 March 2026 06:08:40 +0000 (0:00:01.522) 1:06:04.431 ********
2026-03-26 06:08:45.782696 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:08:45.782707 | orchestrator |
2026-03-26 06:08:45.782718 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 06:08:45.782728 | orchestrator | Thursday 26 March 2026 06:08:42 +0000 (0:00:01.521) 1:06:05.953 ********
2026-03-26 06:08:45.782739 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.782750 | orchestrator |
2026-03-26 06:08:45.782761 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 06:08:45.782771 | orchestrator | Thursday 26 March 2026 06:08:43 +0000 (0:00:01.161) 1:06:07.114 ********
2026-03-26 06:08:45.782782 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.782793 | orchestrator |
2026-03-26 06:08:45.782804 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 06:08:45.782815 | orchestrator | Thursday 26 March 2026 06:08:44 +0000 (0:00:01.145) 1:06:08.260 ********
2026-03-26 06:08:45.782826 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:08:45.782837 | orchestrator |
2026-03-26 06:08:45.782848 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 06:08:45.782866 | orchestrator | Thursday 26 March 2026 06:08:45 +0000 (0:00:01.167) 1:06:09.427 ********
2026-03-26 06:09:26.666700 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.666884 | orchestrator |
2026-03-26 06:09:26.666914 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-26 06:09:26.666937 | orchestrator | Thursday 26 March 2026 06:08:47 +0000 (0:00:01.557) 1:06:10.985 ********
2026-03-26 06:09:26.666955 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.666974 | orchestrator |
2026-03-26 06:09:26.666993 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-26 06:09:26.667014 | orchestrator | Thursday 26 March 2026 06:08:48 +0000 (0:00:01.513) 1:06:12.499 ********
2026-03-26 06:09:26.667033 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.667051 | orchestrator |
2026-03-26 06:09:26.667071 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-26 06:09:26.667089 | orchestrator | Thursday 26 March 2026 06:08:49 +0000 (0:00:00.778) 1:06:13.277 ********
2026-03-26 06:09:26.667107 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.667127 | orchestrator |
2026-03-26 06:09:26.667147 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-26 06:09:26.667166 | orchestrator | Thursday 26 March 2026 06:08:50 +0000 (0:00:00.806) 1:06:14.083 ********
2026-03-26 06:09:26.667186 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.667205 | orchestrator |
2026-03-26 06:09:26.667221 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-26 06:09:26.667235 | orchestrator | Thursday 26 March 2026 06:08:51 +0000 (0:00:00.838) 1:06:14.922 ********
2026-03-26 06:09:26.667247 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.667258 | orchestrator |
2026-03-26 06:09:26.667269 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-26 06:09:26.667280 | orchestrator | Thursday 26 March 2026 06:08:52 +0000 (0:00:00.832) 1:06:15.754 ********
2026-03-26 06:09:26.667298 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.667317 | orchestrator |
2026-03-26 06:09:26.667335 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-26 06:09:26.667352 | orchestrator | Thursday 26 March 2026 06:08:52 +0000 (0:00:00.845) 1:06:16.600 ********
2026-03-26 06:09:26.667370 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.667389 | orchestrator |
2026-03-26 06:09:26.667407 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-26 06:09:26.667426 | orchestrator | Thursday 26 March 2026 06:08:53 +0000 (0:00:00.846) 1:06:17.446 ********
2026-03-26 06:09:26.667446 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.667464 | orchestrator |
2026-03-26 06:09:26.667483 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-26 06:09:26.667592 | orchestrator | Thursday 26 March 2026 06:08:54 +0000 (0:00:00.815) 1:06:18.262 ********
2026-03-26 06:09:26.667615 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.667635 | orchestrator |
2026-03-26 06:09:26.667655 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-26 06:09:26.667676 | orchestrator | Thursday 26 March 2026 06:08:55 +0000 (0:00:00.773) 1:06:19.035 ********
2026-03-26 06:09:26.667698 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.667717 | orchestrator |
2026-03-26 06:09:26.667738 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-26 06:09:26.667758 | orchestrator | Thursday 26 March 2026 06:08:56 +0000 (0:00:00.814) 1:06:19.850 ********
2026-03-26 06:09:26.667779 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.667799 | orchestrator |
2026-03-26 06:09:26.667820 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-03-26 06:09:26.667842 | orchestrator | Thursday 26 March 2026 06:08:56 +0000 (0:00:00.805) 1:06:20.656 ********
2026-03-26 06:09:26.667861 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.667880 | orchestrator |
2026-03-26 06:09:26.667900 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-03-26 06:09:26.667920 | orchestrator | Thursday 26 March 2026 06:08:57 +0000 (0:00:00.758) 1:06:21.415 ********
2026-03-26 06:09:26.667941 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.667961 | orchestrator |
2026-03-26 06:09:26.667980 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-03-26 06:09:26.667999 | orchestrator | Thursday 26 March 2026 06:08:58 +0000 (0:00:00.853) 1:06:22.268 ********
2026-03-26 06:09:26.668018 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668038 | orchestrator |
2026-03-26 06:09:26.668059 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-03-26 06:09:26.668079 | orchestrator | Thursday 26 March 2026 06:08:59 +0000 (0:00:00.786) 1:06:23.054 ********
2026-03-26 06:09:26.668098 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668118 | orchestrator |
2026-03-26 06:09:26.668138 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-03-26 06:09:26.668159 | orchestrator | Thursday 26 March 2026 06:09:00 +0000 (0:00:00.765) 1:06:23.820 ********
2026-03-26 06:09:26.668179 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668200 | orchestrator |
2026-03-26 06:09:26.668219 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-03-26 06:09:26.668238 | orchestrator | Thursday 26 March 2026 06:09:00 +0000 (0:00:00.764) 1:06:24.585 ********
2026-03-26 06:09:26.668258 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668278 | orchestrator |
2026-03-26 06:09:26.668299 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-03-26 06:09:26.668319 | orchestrator | Thursday 26 March 2026 06:09:01 +0000 (0:00:00.809) 1:06:25.395 ********
2026-03-26 06:09:26.668338 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668358 | orchestrator |
2026-03-26 06:09:26.668378 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-03-26 06:09:26.668469 | orchestrator | Thursday 26 March 2026 06:09:02 +0000 (0:00:00.805) 1:06:26.200 ********
2026-03-26 06:09:26.668484 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668495 | orchestrator |
2026-03-26 06:09:26.668506 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-03-26 06:09:26.668544 | orchestrator | Thursday 26 March 2026 06:09:03 +0000 (0:00:00.864) 1:06:27.065 ********
2026-03-26 06:09:26.668560 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668570 | orchestrator |
2026-03-26 06:09:26.668605 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-03-26 06:09:26.668617 | orchestrator | Thursday 26 March 2026 06:09:04 +0000 (0:00:00.840) 1:06:27.905 ********
2026-03-26 06:09:26.668627 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668638 | orchestrator |
2026-03-26 06:09:26.668649 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-03-26 06:09:26.668671 | orchestrator | Thursday 26 March 2026 06:09:05 +0000 (0:00:00.811) 1:06:28.717 ********
2026-03-26 06:09:26.668682 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668693 | orchestrator |
2026-03-26 06:09:26.668703 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-03-26 06:09:26.668713 | orchestrator | Thursday 26 March 2026 06:09:05 +0000 (0:00:00.803) 1:06:29.520 ********
2026-03-26 06:09:26.668724 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668734 | orchestrator |
2026-03-26 06:09:26.668745 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-26 06:09:26.668756 | orchestrator | Thursday 26 March 2026 06:09:06 +0000 (0:00:00.774) 1:06:30.295 ********
2026-03-26 06:09:26.668766 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.668777 | orchestrator |
2026-03-26 06:09:26.668787 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-26 06:09:26.668798 | orchestrator | Thursday 26 March 2026 06:09:08 +0000 (0:00:01.596) 1:06:31.892 ********
2026-03-26 06:09:26.668808 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.668819 | orchestrator |
2026-03-26 06:09:26.668829 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-26 06:09:26.668841 | orchestrator | Thursday 26 March 2026 06:09:10 +0000 (0:00:01.890) 1:06:33.782 ********
2026-03-26 06:09:26.668859 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-03-26 06:09:26.668878 | orchestrator |
2026-03-26 06:09:26.668895 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-26 06:09:26.668915 | orchestrator | Thursday 26 March 2026 06:09:11 +0000 (0:00:01.174) 1:06:34.956 ********
2026-03-26 06:09:26.668932 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668950 | orchestrator |
2026-03-26 06:09:26.668961 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-26 06:09:26.668971 | orchestrator | Thursday 26 March 2026 06:09:12 +0000 (0:00:01.164) 1:06:36.121 ********
2026-03-26 06:09:26.668982 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.668992 | orchestrator |
2026-03-26 06:09:26.669003 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-26 06:09:26.669014 | orchestrator | Thursday 26 March 2026 06:09:13 +0000 (0:00:01.144) 1:06:37.265 ********
2026-03-26 06:09:26.669024 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-26 06:09:26.669035 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-26 06:09:26.669045 | orchestrator |
2026-03-26 06:09:26.669063 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-26 06:09:26.669074 | orchestrator | Thursday 26 March 2026 06:09:15 +0000 (0:00:01.872) 1:06:39.138 ********
2026-03-26 06:09:26.669084 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.669095 | orchestrator |
2026-03-26 06:09:26.669105 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-26 06:09:26.669116 | orchestrator | Thursday 26 March 2026 06:09:16 +0000 (0:00:01.439) 1:06:40.578 ********
2026-03-26 06:09:26.669126 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.669137 | orchestrator |
2026-03-26 06:09:26.669147 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-26 06:09:26.669157 | orchestrator | Thursday 26 March 2026 06:09:18 +0000 (0:00:01.715) 1:06:42.294 ********
2026-03-26 06:09:26.669168 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.669178 | orchestrator |
2026-03-26 06:09:26.669189 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-26 06:09:26.669199 | orchestrator | Thursday 26 March 2026 06:09:19 +0000 (0:00:00.823) 1:06:43.117 ********
2026-03-26 06:09:26.669210 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.669221 | orchestrator |
2026-03-26 06:09:26.669231 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-26 06:09:26.669251 | orchestrator | Thursday 26 March 2026 06:09:20 +0000 (0:00:00.793) 1:06:43.910 ********
2026-03-26 06:09:26.669261 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-03-26 06:09:26.669272 | orchestrator |
2026-03-26 06:09:26.669282 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-26 06:09:26.669293 | orchestrator | Thursday 26 March 2026 06:09:21 +0000 (0:00:01.138) 1:06:45.049 ********
2026-03-26 06:09:26.669303 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:09:26.669314 | orchestrator |
2026-03-26 06:09:26.669328 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-26 06:09:26.669346 | orchestrator | Thursday 26 March 2026 06:09:23 +0000 (0:00:01.847) 1:06:46.897 ********
2026-03-26 06:09:26.669374 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-26 06:09:26.669394 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-26 06:09:26.669411 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-26 06:09:26.669429 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.669449 | orchestrator |
2026-03-26 06:09:26.669468 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-26 06:09:26.669486 | orchestrator | Thursday 26 March 2026 06:09:24 +0000 (0:00:01.138) 1:06:48.035 ********
2026-03-26 06:09:26.669512 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.669561 | orchestrator |
2026-03-26 06:09:26.669579 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-26 06:09:26.669597 | orchestrator | Thursday 26 March 2026 06:09:25 +0000 (0:00:01.135) 1:06:49.171 ********
2026-03-26 06:09:26.669615 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:09:26.669634 | orchestrator |
2026-03-26 06:09:26.669668 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-26 06:10:09.564841 | orchestrator | Thursday 26 March 2026 06:09:26 +0000 (0:00:01.141) 1:06:50.313 ********
2026-03-26 06:10:09.564941 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.564959 | orchestrator |
2026-03-26 06:10:09.564971 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-26 06:10:09.564983 | orchestrator | Thursday 26 March 2026 06:09:27 +0000 (0:00:01.187) 1:06:51.501 ********
2026-03-26 06:10:09.564994 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565005 | orchestrator |
2026-03-26 06:10:09.565016 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-26 06:10:09.565027 | orchestrator | Thursday 26 March 2026 06:09:29 +0000 (0:00:01.179) 1:06:52.681 ********
2026-03-26 06:10:09.565038 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565049 | orchestrator |
2026-03-26 06:10:09.565060 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-26 06:10:09.565071 | orchestrator | Thursday 26 March 2026 06:09:29 +0000 (0:00:00.807) 1:06:53.488 ********
2026-03-26 06:10:09.565081 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:10:09.565093 | orchestrator |
2026-03-26 06:10:09.565104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-26 06:10:09.565115 | orchestrator | Thursday 26 March 2026 06:09:31 +0000 (0:00:02.130) 1:06:55.619 ********
2026-03-26 06:10:09.565126 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:10:09.565137 | orchestrator |
2026-03-26 06:10:09.565148 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 06:10:09.565159 | orchestrator | Thursday 26 March 2026 06:09:32 +0000 (0:00:00.856) 1:06:56.475 ********
2026-03-26 06:10:09.565169 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-03-26 06:10:09.565180 | orchestrator |
2026-03-26 06:10:09.565191 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 06:10:09.565202 | orchestrator | Thursday 26 March 2026 06:09:33 +0000 (0:00:01.162) 1:06:57.637 ********
2026-03-26 06:10:09.565213 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565241 | orchestrator |
2026-03-26 06:10:09.565252 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 06:10:09.565263 | orchestrator | Thursday 26 March 2026 06:09:35 +0000 (0:00:01.131) 1:06:58.769 ********
2026-03-26 06:10:09.565274 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565284 | orchestrator |
2026-03-26 06:10:09.565295 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 06:10:09.565306 | orchestrator | Thursday 26 March 2026 06:09:36 +0000 (0:00:01.155) 1:06:59.924 ********
2026-03-26 06:10:09.565317 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565327 | orchestrator |
2026-03-26 06:10:09.565338 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 06:10:09.565349 | orchestrator | Thursday 26 March 2026 06:09:37 +0000 (0:00:01.174) 1:07:01.099 ********
2026-03-26 06:10:09.565366 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565377 | orchestrator |
2026-03-26 06:10:09.565388 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 06:10:09.565399 | orchestrator | Thursday 26 March 2026 06:09:38 +0000 (0:00:01.181) 1:07:02.281 ********
2026-03-26 06:10:09.565412 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565425 | orchestrator |
2026-03-26 06:10:09.565437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 06:10:09.565449 | orchestrator | Thursday 26 March 2026 06:09:39 +0000 (0:00:01.119) 1:07:03.400 ********
2026-03-26 06:10:09.565462 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565474 | orchestrator |
2026-03-26 06:10:09.565486 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 06:10:09.565499 | orchestrator | Thursday 26 March 2026 06:09:40 +0000 (0:00:01.223) 1:07:04.624 ********
2026-03-26 06:10:09.565553 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565574 | orchestrator |
2026-03-26 06:10:09.565594 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 06:10:09.565614 | orchestrator | Thursday 26 March 2026 06:09:42 +0000 (0:00:01.161) 1:07:05.786 ********
2026-03-26 06:10:09.565633 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.565649 | orchestrator |
2026-03-26 06:10:09.565659 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 06:10:09.565671 | orchestrator | Thursday 26 March 2026 06:09:43 +0000 (0:00:01.113) 1:07:06.900 ********
2026-03-26 06:10:09.565681 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:10:09.565692 | orchestrator |
2026-03-26 06:10:09.565702 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 06:10:09.565713 | orchestrator | Thursday 26 March 2026 06:09:44 +0000 (0:00:00.829) 1:07:07.730 ********
2026-03-26 06:10:09.565724 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-03-26 06:10:09.565735 | orchestrator |
2026-03-26 06:10:09.565745 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 06:10:09.565756 | orchestrator | Thursday 26 March 2026 06:09:45 +0000 (0:00:01.306) 1:07:09.036 ********
2026-03-26 06:10:09.565767 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-03-26 06:10:09.565777 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-26 06:10:09.565788 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-26 06:10:09.565799 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-26 06:10:09.565809 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-26 06:10:09.565820 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-26 06:10:09.565830 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-26 06:10:09.565841 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-26 06:10:09.565852 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 06:10:09.565862 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 06:10:09.565873 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 06:10:09.565909 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 06:10:09.565921 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 06:10:09.565932 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 06:10:09.565942 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-03-26 06:10:09.565953 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-03-26 06:10:09.565963 | orchestrator |
2026-03-26 06:10:09.565974 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 06:10:09.565985 | orchestrator | Thursday 26 March 2026 06:09:51 +0000 (0:00:06.102) 1:07:15.139 ********
2026-03-26 06:10:09.565995 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-03-26 06:10:09.566006 | orchestrator |
2026-03-26 06:10:09.566086 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-26 06:10:09.566100 | orchestrator | Thursday 26 March 2026 06:09:52 +0000 (0:00:01.117) 1:07:16.256 ********
2026-03-26 06:10:09.566111 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-26 06:10:09.566123 | orchestrator |
2026-03-26 06:10:09.566134 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-26 06:10:09.566153 | orchestrator | Thursday 26 March 2026 06:09:54 +0000 (0:00:01.529) 1:07:17.786 ********
2026-03-26 06:10:09.566164 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-26 06:10:09.566175 | orchestrator |
2026-03-26 06:10:09.566186 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 06:10:09.566196 | orchestrator | Thursday 26 March 2026 06:09:55 +0000 (0:00:00.819) 1:07:19.494 ********
2026-03-26 06:10:09.566207 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.566217 | orchestrator |
2026-03-26 06:10:09.566228 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 06:10:09.566239 | orchestrator | Thursday 26 March 2026 06:09:56 +0000 (0:00:00.766) 1:07:20.314 ********
2026-03-26 06:10:09.566249 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.566273 | orchestrator |
2026-03-26 06:10:09.566293 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 06:10:09.566304 | orchestrator | Thursday 26 March 2026 06:09:57 +0000 (0:00:00.766) 1:07:21.080 ********
2026-03-26 06:10:09.566315 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.566326 | orchestrator |
2026-03-26 06:10:09.566336 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 06:10:09.566353 | orchestrator | Thursday 26 March 2026 06:09:58 +0000 (0:00:00.771) 1:07:21.851 ********
2026-03-26 06:10:09.566364 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.566374 | orchestrator |
2026-03-26 06:10:09.566385 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 06:10:09.566396 | orchestrator | Thursday 26 March 2026 06:09:59 +0000 (0:00:00.806) 1:07:22.658 ********
2026-03-26 06:10:09.566406 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.566417 | orchestrator |
2026-03-26 06:10:09.566427 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 06:10:09.566438 | orchestrator | Thursday 26 March 2026 06:09:59 +0000 (0:00:00.803) 1:07:23.461 ********
2026-03-26 06:10:09.566449 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.566459 | orchestrator |
2026-03-26 06:10:09.566470 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 06:10:09.566480 | orchestrator | Thursday 26 March 2026 06:10:00 +0000 (0:00:00.810) 1:07:24.272 ********
2026-03-26 06:10:09.566491 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:10:09.566501 | orchestrator |
2026-03-26 06:10:09.566527 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)]
*** 2026-03-26 06:10:09.566549 | orchestrator | Thursday 26 March 2026 06:10:01 +0000 (0:00:00.829) 1:07:25.102 ******** 2026-03-26 06:10:09.566560 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:09.566571 | orchestrator | 2026-03-26 06:10:09.566582 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-26 06:10:09.566592 | orchestrator | Thursday 26 March 2026 06:10:02 +0000 (0:00:00.815) 1:07:25.917 ******** 2026-03-26 06:10:09.566603 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:09.566613 | orchestrator | 2026-03-26 06:10:09.566624 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-26 06:10:09.566635 | orchestrator | Thursday 26 March 2026 06:10:03 +0000 (0:00:00.764) 1:07:26.682 ******** 2026-03-26 06:10:09.566645 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:09.566656 | orchestrator | 2026-03-26 06:10:09.566666 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-26 06:10:09.566677 | orchestrator | Thursday 26 March 2026 06:10:03 +0000 (0:00:00.806) 1:07:27.490 ******** 2026-03-26 06:10:09.566688 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:09.566699 | orchestrator | 2026-03-26 06:10:09.566709 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-26 06:10:09.566720 | orchestrator | Thursday 26 March 2026 06:10:04 +0000 (0:00:00.810) 1:07:28.300 ******** 2026-03-26 06:10:09.566731 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-03-26 06:10:09.566741 | orchestrator | 2026-03-26 06:10:09.566752 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-26 06:10:09.566763 | orchestrator | Thursday 26 March 2026 06:10:08 +0000 (0:00:04.119) 1:07:32.420 ******** 2026-03-26 06:10:09.566773 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 06:10:09.566784 | orchestrator | 2026-03-26 06:10:09.566803 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-26 06:10:50.400750 | orchestrator | Thursday 26 March 2026 06:10:09 +0000 (0:00:00.795) 1:07:33.216 ******** 2026-03-26 06:10:50.400865 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-03-26 06:10:50.400883 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-03-26 06:10:50.400895 | orchestrator | 2026-03-26 06:10:50.400906 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-26 06:10:50.400916 | orchestrator | Thursday 26 March 2026 06:10:13 +0000 (0:00:04.429) 1:07:37.645 ******** 2026-03-26 06:10:50.400926 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.400936 | orchestrator | 2026-03-26 06:10:50.400946 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-26 06:10:50.400956 | orchestrator | Thursday 26 March 2026 06:10:14 +0000 (0:00:00.807) 1:07:38.453 ******** 2026-03-26 06:10:50.400965 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.400975 | orchestrator | 2026-03-26 06:10:50.400985 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 06:10:50.400996 | orchestrator | Thursday 26 March 2026 06:10:15 +0000 (0:00:00.793) 1:07:39.246 ******** 2026-03-26 06:10:50.401006 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.401015 | orchestrator | 2026-03-26 06:10:50.401025 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 06:10:50.401058 | orchestrator | Thursday 26 March 2026 06:10:16 +0000 (0:00:00.817) 1:07:40.064 ******** 2026-03-26 06:10:50.401068 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.401077 | orchestrator | 2026-03-26 06:10:50.401087 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 06:10:50.401096 | orchestrator | Thursday 26 March 2026 06:10:17 +0000 (0:00:00.815) 1:07:40.880 ******** 2026-03-26 06:10:50.401105 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.401115 | orchestrator | 2026-03-26 06:10:50.401124 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 06:10:50.401135 | orchestrator | Thursday 26 March 2026 06:10:18 +0000 (0:00:00.821) 1:07:41.701 ******** 2026-03-26 06:10:50.401145 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:10:50.401155 | orchestrator | 2026-03-26 06:10:50.401165 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 06:10:50.401174 | orchestrator | Thursday 26 March 2026 06:10:19 +0000 (0:00:01.058) 1:07:42.759 ******** 2026-03-26 06:10:50.401184 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 06:10:50.401194 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 06:10:50.401203 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 06:10:50.401212 | orchestrator | skipping: 
[testbed-node-4] 2026-03-26 06:10:50.401222 | orchestrator | 2026-03-26 06:10:50.401231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 06:10:50.401241 | orchestrator | Thursday 26 March 2026 06:10:20 +0000 (0:00:01.076) 1:07:43.836 ******** 2026-03-26 06:10:50.401250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 06:10:50.401259 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 06:10:50.401269 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 06:10:50.401278 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.401287 | orchestrator | 2026-03-26 06:10:50.401298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 06:10:50.401310 | orchestrator | Thursday 26 March 2026 06:10:21 +0000 (0:00:01.131) 1:07:44.967 ******** 2026-03-26 06:10:50.401321 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-03-26 06:10:50.401331 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-03-26 06:10:50.401342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-03-26 06:10:50.401352 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.401364 | orchestrator | 2026-03-26 06:10:50.401374 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 06:10:50.401385 | orchestrator | Thursday 26 March 2026 06:10:22 +0000 (0:00:01.087) 1:07:46.055 ******** 2026-03-26 06:10:50.401395 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:10:50.401406 | orchestrator | 2026-03-26 06:10:50.401417 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 06:10:50.401427 | orchestrator | Thursday 26 March 2026 06:10:23 +0000 (0:00:00.843) 1:07:46.899 ******** 2026-03-26 06:10:50.401438 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-03-26 06:10:50.401449 | orchestrator | 2026-03-26 06:10:50.401460 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-26 06:10:50.401471 | orchestrator | Thursday 26 March 2026 06:10:24 +0000 (0:00:01.042) 1:07:47.942 ******** 2026-03-26 06:10:50.401481 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:10:50.401492 | orchestrator | 2026-03-26 06:10:50.401503 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-26 06:10:50.401538 | orchestrator | Thursday 26 March 2026 06:10:25 +0000 (0:00:01.435) 1:07:49.378 ******** 2026-03-26 06:10:50.401549 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-03-26 06:10:50.401560 | orchestrator | 2026-03-26 06:10:50.401586 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-26 06:10:50.401606 | orchestrator | Thursday 26 March 2026 06:10:26 +0000 (0:00:01.141) 1:07:50.519 ******** 2026-03-26 06:10:50.401617 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 06:10:50.401628 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 06:10:50.401639 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 06:10:50.401650 | orchestrator | 2026-03-26 06:10:50.401659 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-26 06:10:50.401669 | orchestrator | Thursday 26 March 2026 06:10:30 +0000 (0:00:03.219) 1:07:53.739 ******** 2026-03-26 06:10:50.401678 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-26 06:10:50.401688 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-26 06:10:50.401697 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:10:50.401706 | orchestrator | 2026-03-26 06:10:50.401716 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-03-26 06:10:50.401726 | orchestrator | Thursday 26 March 2026 06:10:32 +0000 (0:00:01.950) 1:07:55.689 ******** 2026-03-26 06:10:50.401735 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.401745 | orchestrator | 2026-03-26 06:10:50.401754 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-26 06:10:50.401764 | orchestrator | Thursday 26 March 2026 06:10:32 +0000 (0:00:00.749) 1:07:56.438 ******** 2026-03-26 06:10:50.401773 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-03-26 06:10:50.401783 | orchestrator | 2026-03-26 06:10:50.401792 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-26 06:10:50.401802 | orchestrator | Thursday 26 March 2026 06:10:34 +0000 (0:00:01.315) 1:07:57.754 ******** 2026-03-26 06:10:50.401811 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 06:10:50.401822 | orchestrator | 2026-03-26 06:10:50.401831 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-26 06:10:50.401841 | orchestrator | Thursday 26 March 2026 06:10:35 +0000 (0:00:01.590) 1:07:59.345 ******** 2026-03-26 06:10:50.401850 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 06:10:50.401860 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-26 06:10:50.401869 | orchestrator | 2026-03-26 06:10:50.401879 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-26 06:10:50.401888 | orchestrator | Thursday 26 March 2026 06:10:40 +0000 (0:00:05.213) 1:08:04.559 ******** 
2026-03-26 06:10:50.401898 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-26 06:10:50.401907 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-26 06:10:50.401916 | orchestrator | 2026-03-26 06:10:50.401926 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-26 06:10:50.401935 | orchestrator | Thursday 26 March 2026 06:10:43 +0000 (0:00:03.044) 1:08:07.604 ******** 2026-03-26 06:10:50.401945 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-03-26 06:10:50.401954 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:10:50.401964 | orchestrator | 2026-03-26 06:10:50.401973 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-26 06:10:50.401982 | orchestrator | Thursday 26 March 2026 06:10:45 +0000 (0:00:01.639) 1:08:09.244 ******** 2026-03-26 06:10:50.401992 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-03-26 06:10:50.402001 | orchestrator | 2026-03-26 06:10:50.402011 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-26 06:10:50.402078 | orchestrator | Thursday 26 March 2026 06:10:46 +0000 (0:00:01.160) 1:08:10.404 ******** 2026-03-26 06:10:50.402088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402144 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:10:50.402153 | orchestrator | 2026-03-26 06:10:50.402163 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-26 06:10:50.402172 | orchestrator | Thursday 26 March 2026 06:10:48 +0000 (0:00:01.647) 1:08:12.052 ******** 2026-03-26 06:10:50.402182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:10:50.402217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:11:57.056363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-26 06:11:57.056618 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:11:57.056652 | orchestrator | 2026-03-26 06:11:57.056673 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-26 06:11:57.056695 | orchestrator | Thursday 26 March 2026 06:10:50 +0000 (0:00:01.995) 1:08:14.047 ******** 2026-03-26 06:11:57.056716 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:11:57.056737 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:11:57.056758 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:11:57.056778 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:11:57.056798 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-26 06:11:57.056819 | orchestrator | 2026-03-26 06:11:57.056839 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-26 06:11:57.056864 | orchestrator | Thursday 26 March 2026 06:11:21 +0000 (0:00:31.450) 1:08:45.498 ******** 2026-03-26 06:11:57.056886 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:11:57.056908 | orchestrator | 2026-03-26 06:11:57.056930 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-26 06:11:57.056950 | orchestrator | Thursday 26 March 2026 06:11:22 +0000 (0:00:00.762) 1:08:46.260 ******** 2026-03-26 06:11:57.056972 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:11:57.056993 | orchestrator | 2026-03-26 06:11:57.057016 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-26 06:11:57.057039 | orchestrator | Thursday 26 March 2026 06:11:23 +0000 (0:00:00.772) 1:08:47.032 ******** 2026-03-26 06:11:57.057061 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-03-26 06:11:57.057120 | orchestrator | 2026-03-26 06:11:57.057143 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-03-26 06:11:57.057164 | orchestrator | Thursday 26 March 2026 06:11:24 +0000 (0:00:01.262) 1:08:48.295 ******** 2026-03-26 06:11:57.057182 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-03-26 06:11:57.057201 | orchestrator | 2026-03-26 06:11:57.057220 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-26 06:11:57.057238 | orchestrator | Thursday 26 March 2026 06:11:25 +0000 (0:00:01.163) 1:08:49.458 ******** 2026-03-26 06:11:57.057257 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:11:57.057277 | orchestrator | 2026-03-26 06:11:57.057296 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-26 06:11:57.057314 | orchestrator | Thursday 26 March 2026 06:11:27 +0000 (0:00:02.068) 1:08:51.526 ******** 2026-03-26 06:11:57.057332 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:11:57.057349 | orchestrator | 2026-03-26 06:11:57.057368 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-26 06:11:57.057387 | orchestrator | Thursday 26 March 2026 06:11:29 +0000 (0:00:01.867) 1:08:53.394 ******** 2026-03-26 06:11:57.057405 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:11:57.057424 | orchestrator | 2026-03-26 06:11:57.057442 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-26 06:11:57.057462 | orchestrator | Thursday 26 March 2026 06:11:31 +0000 (0:00:02.209) 1:08:55.603 ******** 2026-03-26 06:11:57.057482 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-26 06:11:57.057532 | orchestrator | 2026-03-26 06:11:57.057552 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-03-26 06:11:57.057571 | 
orchestrator | 2026-03-26 06:11:57.057588 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 06:11:57.057606 | orchestrator | Thursday 26 March 2026 06:11:34 +0000 (0:00:02.821) 1:08:58.425 ******** 2026-03-26 06:11:57.057624 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-03-26 06:11:57.057643 | orchestrator | 2026-03-26 06:11:57.057661 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-26 06:11:57.057679 | orchestrator | Thursday 26 March 2026 06:11:35 +0000 (0:00:01.179) 1:08:59.604 ******** 2026-03-26 06:11:57.057697 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.057708 | orchestrator | 2026-03-26 06:11:57.057719 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-26 06:11:57.057729 | orchestrator | Thursday 26 March 2026 06:11:37 +0000 (0:00:01.538) 1:09:01.143 ******** 2026-03-26 06:11:57.057740 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.057750 | orchestrator | 2026-03-26 06:11:57.057761 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 06:11:57.057771 | orchestrator | Thursday 26 March 2026 06:11:38 +0000 (0:00:01.152) 1:09:02.295 ******** 2026-03-26 06:11:57.057782 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.057792 | orchestrator | 2026-03-26 06:11:57.057803 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 06:11:57.057813 | orchestrator | Thursday 26 March 2026 06:11:40 +0000 (0:00:01.490) 1:09:03.786 ******** 2026-03-26 06:11:57.057824 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.057834 | orchestrator | 2026-03-26 06:11:57.057868 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-26 06:11:57.057880 | orchestrator | Thursday 
26 March 2026 06:11:41 +0000 (0:00:01.188) 1:09:04.974 ******** 2026-03-26 06:11:57.057891 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.057902 | orchestrator | 2026-03-26 06:11:57.057912 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-26 06:11:57.057924 | orchestrator | Thursday 26 March 2026 06:11:42 +0000 (0:00:01.179) 1:09:06.154 ******** 2026-03-26 06:11:57.057952 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.057963 | orchestrator | 2026-03-26 06:11:57.057973 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-26 06:11:57.057984 | orchestrator | Thursday 26 March 2026 06:11:43 +0000 (0:00:01.194) 1:09:07.348 ******** 2026-03-26 06:11:57.057995 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:11:57.058006 | orchestrator | 2026-03-26 06:11:57.058077 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-26 06:11:57.058091 | orchestrator | Thursday 26 March 2026 06:11:44 +0000 (0:00:01.137) 1:09:08.486 ******** 2026-03-26 06:11:57.058102 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.058113 | orchestrator | 2026-03-26 06:11:57.058123 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-26 06:11:57.058134 | orchestrator | Thursday 26 March 2026 06:11:45 +0000 (0:00:01.143) 1:09:09.629 ******** 2026-03-26 06:11:57.058145 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:11:57.058155 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:11:57.058166 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:11:57.058176 | orchestrator | 2026-03-26 06:11:57.058187 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-03-26 06:11:57.058198 | orchestrator | Thursday 26 March 2026 06:11:47 +0000 (0:00:01.764) 1:09:11.394 ******** 2026-03-26 06:11:57.058208 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:11:57.058219 | orchestrator | 2026-03-26 06:11:57.058229 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-26 06:11:57.058240 | orchestrator | Thursday 26 March 2026 06:11:49 +0000 (0:00:01.358) 1:09:12.752 ******** 2026-03-26 06:11:57.058250 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:11:57.058261 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:11:57.058271 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:11:57.058282 | orchestrator | 2026-03-26 06:11:57.058293 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-26 06:11:57.058303 | orchestrator | Thursday 26 March 2026 06:11:52 +0000 (0:00:03.297) 1:09:16.050 ******** 2026-03-26 06:11:57.058314 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-26 06:11:57.058324 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-26 06:11:57.058335 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-26 06:11:57.058345 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:11:57.058356 | orchestrator | 2026-03-26 06:11:57.058366 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-26 06:11:57.058377 | orchestrator | Thursday 26 March 2026 06:11:53 +0000 (0:00:01.406) 1:09:17.456 ******** 2026-03-26 06:11:57.058390 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-26 06:11:57.058404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-26 06:11:57.058415 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-26 06:11:57.058426 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:11:57.058436 | orchestrator | 2026-03-26 06:11:57.058447 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-26 06:11:57.058465 | orchestrator | Thursday 26 March 2026 06:11:55 +0000 (0:00:02.092) 1:09:19.548 ******** 2026-03-26 06:11:57.058479 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:11:57.058531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:16.139672 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:16.139803 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:12:16.139822 | orchestrator | 2026-03-26 06:12:16.139835 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-26 06:12:16.139848 | orchestrator | Thursday 26 March 2026 06:11:57 +0000 (0:00:01.154) 1:09:20.703 ******** 2026-03-26 06:12:16.139861 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'de9c3b4c4c57', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-26 06:11:49.624637', 'end': '2026-03-26 06:11:49.669646', 'delta': '0:00:00.045009', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['de9c3b4c4c57'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-26 06:12:16.139877 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'd66b87272f8e', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-26 06:11:50.174575', 'end': '2026-03-26 06:11:50.235008', 'delta': '0:00:00.060433', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d66b87272f8e'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-26 06:12:16.139889 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'b850f8fd4697', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-26 06:11:51.182319', 'end': '2026-03-26 06:11:51.231785', 'delta': '0:00:00.049466', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b850f8fd4697'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-26 06:12:16.139927 | orchestrator |
2026-03-26 06:12:16.139940 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-26 06:12:16.139951 | orchestrator | Thursday 26 March 2026 06:11:58 +0000 (0:00:01.315) 1:09:22.018 ********
2026-03-26 06:12:16.139961 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:12:16.139973 | orchestrator |
2026-03-26 06:12:16.139984 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-26 06:12:16.139995 | orchestrator | Thursday 26 March 2026 06:11:59 +0000 (0:00:01.280) 1:09:23.299 ********
2026-03-26 06:12:16.140005 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:12:16.140015 | orchestrator |
2026-03-26 06:12:16.140026 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-26 06:12:16.140036 | orchestrator | Thursday 26 March 2026 06:12:00 +0000 (0:00:01.259) 1:09:24.558 ********
2026-03-26 06:12:16.140047 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:12:16.140057 | orchestrator |
2026-03-26 06:12:16.140068 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-26 06:12:16.140078 | orchestrator | Thursday 26 March 2026 06:12:02 +0000 (0:00:02.020) 1:09:27.712 ********
2026-03-26 06:12:16.140089 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-26 06:12:16.140100 | orchestrator |
2026-03-26 06:12:16.140111 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 06:12:16.140121 | orchestrator | Thursday 26 March 2026 06:12:04 +0000 (0:00:01.150) 1:09:28.863 ********
2026-03-26 06:12:16.140132 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:12:16.140142 | orchestrator |
2026-03-26 06:12:16.140155 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-26 06:12:16.140167 | orchestrator | Thursday 26 March 2026 06:12:05 +0000 (0:00:01.123) 1:09:29.987 ********
2026-03-26 06:12:16.140195 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:12:16.140208 | orchestrator |
2026-03-26 06:12:16.140220 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-26 06:12:16.140233 | orchestrator | Thursday 26 March 2026 06:12:06 +0000 (0:00:01.236) 1:09:31.223 ********
2026-03-26 06:12:16.140245 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:12:16.140256 | orchestrator |
2026-03-26 06:12:16.140269 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-26 06:12:16.140282 | orchestrator | Thursday 26 March 2026 06:12:07 +0000 (0:00:01.181) 1:09:32.405 ********
2026-03-26 06:12:16.140295 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:12:16.140307 | orchestrator |
2026-03-26 06:12:16.140319 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-26 06:12:16.140332 | orchestrator | Thursday 26 March 2026 06:12:08 +0000 (0:00:01.121) 1:09:33.526 ********
2026-03-26 06:12:16.140344 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:12:16.140356 | orchestrator |
2026-03-26 06:12:16.140368 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-26 06:12:16.140380 | orchestrator | Thursday 26 March 2026 06:12:09 +0000 (0:00:01.182) 1:09:34.709 ********
2026-03-26 06:12:16.140392 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:12:16.140404 | orchestrator |
2026-03-26 06:12:16.140416 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-26 06:12:16.140428 | orchestrator | Thursday 26 March 2026 06:12:11 +0000 (0:00:01.137) 1:09:35.846 ********
2026-03-26 06:12:16.140440 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:12:16.140452 | orchestrator |
2026-03-26 06:12:16.140464 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-26 06:12:16.140476 | orchestrator | Thursday 26 March 2026 06:12:12 +0000 (0:00:01.163) 1:09:37.009 ********
2026-03-26 06:12:16.140489 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:12:16.140520 | orchestrator |
2026-03-26 06:12:16.140532 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-26 06:12:16.140542 | orchestrator | Thursday 26 March 2026 06:12:13 +0000 (0:00:01.302) 1:09:38.312 ********
2026-03-26 06:12:16.140561 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:12:16.140572 | orchestrator |
2026-03-26 06:12:16.140582 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-26 06:12:16.140594
| orchestrator | Thursday 26 March 2026 06:12:14 +0000 (0:00:01.302) 1:09:38.312 ******** 2026-03-26 06:12:16.140604 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:12:16.140615 | orchestrator | 2026-03-26 06:12:16.140626 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-26 06:12:16.140637 | orchestrator | Thursday 26 March 2026 06:12:15 +0000 (0:00:01.202) 1:09:39.514 ******** 2026-03-26 06:12:16.140648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:16.140661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'uuids': ['958c3d71-9b3b-484b-8cbf-f174ba1f6fac'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp']}})  2026-03-26 06:12:16.140674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ddd7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-03-26 06:12:16.140696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66']}})  2026-03-26 06:12:17.271150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-03-26 06:12:17.271284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD', 'dm-uuid-CRYPT-LUKS2-4b88786507c84424981e8c33baf61cbe-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'uuids': ['4b887865-07c8-4424-981e-8c33baf61cbe'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD']}})  2026-03-26 06:12:17.271336 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771']}})  2026-03-26 06:12:17.271345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-03-26 06:12:17.271374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-03-26 06:12:17.271396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-03-26 06:12:17.500229 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:12:17.500344 | orchestrator | 2026-03-26 06:12:17.500367 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-26 06:12:17.500411 | orchestrator | Thursday 26 March 2026 06:12:17 +0000 (0:00:01.407) 1:09:40.922 ******** 2026-03-26 06:12:17.500425 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771', 'dm-uuid-LVM-Q7trkX6T9bQrenPM1EuezeEWG2QB7ffx0bNZRnQ3R81VwJTdPWktYtRAGSsXVFlp'], 'uuids': ['958c3d71-9b3b-484b-8cbf-f174ba1f6fac'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500449 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2', 'scsi-SQEMU_QEMU_HARDDISK_8ddd7966-84e6-4951-8a08-7b4fb4af2bd2'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8ddd7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-FriUOI-gUEr-kmP0-nYC7-MoO0-ng3W-Ej90o7', 'scsi-0QEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d', 'scsi-SQEMU_QEMU_HARDDISK_943c088c-5b56-4173-ab64-ec81e1cc816d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500524 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-03-26-01-38-15-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500586 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD', 'dm-uuid-CRYPT-LUKS2-4b88786507c84424981e8c33baf61cbe-A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:17.500646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--83c4def8--4703--5f7c--9549--7666ff9f2b66-osd--block--83c4def8--4703--5f7c--9549--7666ff9f2b66', 'dm-uuid-LVM-DoNgv1c108dy4eu1pvS7TOCWbuA3UXv0A6zrFIA863mhHtIp5pUFeDHxhomhuceD'], 'uuids': ['4b887865-07c8-4424-981e-8c33baf61cbe'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '943c088c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['A6zrFI-A863-mhHt-Ip5p-UFeD-Hxho-mhuceD']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:30.988608 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xgZSV6-0wfE-zGZo-XmXe-xuiN-RWM0-U4VPgB', 'scsi-0QEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102', 'scsi-SQEMU_QEMU_HARDDISK_47760649-09e9-4ed8-8303-e5ee473a8102'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47760649', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--1fd8de68--da37--5e01--9bf2--5a04fcdcd771-osd--block--1fd8de68--da37--5e01--9bf2--5a04fcdcd771']}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:30.988733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:30.988754 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4fa924fa', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1', 'scsi-SQEMU_QEMU_HARDDISK_4fa924fa-33d9-43ce-b208-159d6f6ab539-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:30.988811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:30.988825 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:30.988843 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp', 'dm-uuid-CRYPT-LUKS2-958c3d719b3b484b8cbff174ba1f6fac-0bNZRn-Q3R8-1VwJ-TdPW-ktYt-RAGS-sXVFlp'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-03-26 06:12:30.988864 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:12:30.988885 | orchestrator | 2026-03-26 06:12:30.988904 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-26 06:12:30.988923 | orchestrator | Thursday 26 March 2026 06:12:18 +0000 (0:00:01.383) 1:09:42.306 ******** 2026-03-26 06:12:30.988942 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:12:30.988963 | orchestrator | 2026-03-26 06:12:30.988977 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-26 06:12:30.988987 | orchestrator | Thursday 26 March 2026 06:12:20 +0000 (0:00:01.531) 1:09:43.838 ******** 2026-03-26 06:12:30.988998 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:12:30.989009 | orchestrator | 2026-03-26 06:12:30.989020 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 06:12:30.989030 | orchestrator | Thursday 26 March 2026 06:12:21 +0000 (0:00:01.150) 1:09:44.988 ******** 2026-03-26 06:12:30.989041 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:12:30.989051 | orchestrator | 2026-03-26 06:12:30.989064 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 06:12:30.989077 | orchestrator | Thursday 26 March 2026 06:12:22 +0000 (0:00:01.511) 1:09:46.499 ******** 2026-03-26 06:12:30.989090 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:12:30.989103 | orchestrator | 2026-03-26 06:12:30.989116 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-26 06:12:30.989128 | orchestrator | Thursday 26 March 2026 06:12:23 +0000 (0:00:01.109) 1:09:47.609 ******** 2026-03-26 06:12:30.989141 | orchestrator | skipping: [testbed-node-5] 2026-03-26 
06:12:30.989153 | orchestrator | 2026-03-26 06:12:30.989165 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-26 06:12:30.989187 | orchestrator | Thursday 26 March 2026 06:12:25 +0000 (0:00:01.263) 1:09:48.873 ******** 2026-03-26 06:12:30.989200 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:12:30.989213 | orchestrator | 2026-03-26 06:12:30.989225 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-26 06:12:30.989237 | orchestrator | Thursday 26 March 2026 06:12:26 +0000 (0:00:01.189) 1:09:50.062 ******** 2026-03-26 06:12:30.989250 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-26 06:12:30.989269 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-26 06:12:30.989288 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-26 06:12:30.989305 | orchestrator | 2026-03-26 06:12:30.989324 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-26 06:12:30.989342 | orchestrator | Thursday 26 March 2026 06:12:28 +0000 (0:00:02.086) 1:09:52.149 ******** 2026-03-26 06:12:30.989361 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-26 06:12:30.989379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-26 06:12:30.989398 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-26 06:12:30.989418 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:12:30.989436 | orchestrator | 2026-03-26 06:12:30.989454 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-26 06:12:30.989473 | orchestrator | Thursday 26 March 2026 06:12:29 +0000 (0:00:01.182) 1:09:53.331 ******** 2026-03-26 06:12:30.989516 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-03-26 06:12:30.989531 | 
orchestrator | 2026-03-26 06:12:30.989552 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-26 06:13:12.732885 | orchestrator | Thursday 26 March 2026 06:12:30 +0000 (0:00:01.303) 1:09:54.635 ******** 2026-03-26 06:13:12.733004 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.733022 | orchestrator | 2026-03-26 06:13:12.733035 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-26 06:13:12.733047 | orchestrator | Thursday 26 March 2026 06:12:32 +0000 (0:00:01.222) 1:09:55.858 ******** 2026-03-26 06:13:12.733058 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.733069 | orchestrator | 2026-03-26 06:13:12.733080 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-26 06:13:12.733090 | orchestrator | Thursday 26 March 2026 06:12:33 +0000 (0:00:01.130) 1:09:56.988 ******** 2026-03-26 06:13:12.733101 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.733112 | orchestrator | 2026-03-26 06:13:12.733123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-26 06:13:12.733133 | orchestrator | Thursday 26 March 2026 06:12:34 +0000 (0:00:01.151) 1:09:58.140 ******** 2026-03-26 06:13:12.733144 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.733156 | orchestrator | 2026-03-26 06:13:12.733167 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-26 06:13:12.733178 | orchestrator | Thursday 26 March 2026 06:12:35 +0000 (0:00:01.247) 1:09:59.388 ******** 2026-03-26 06:13:12.733189 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 06:13:12.733200 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 06:13:12.733210 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-03-26 06:13:12.733221 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.733232 | orchestrator | 2026-03-26 06:13:12.733243 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-26 06:13:12.733254 | orchestrator | Thursday 26 March 2026 06:12:37 +0000 (0:00:01.441) 1:10:00.830 ******** 2026-03-26 06:13:12.733265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 06:13:12.733280 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 06:13:12.733315 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 06:13:12.733327 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.733337 | orchestrator | 2026-03-26 06:13:12.733348 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-26 06:13:12.733359 | orchestrator | Thursday 26 March 2026 06:12:38 +0000 (0:00:01.414) 1:10:02.245 ******** 2026-03-26 06:13:12.733370 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-03-26 06:13:12.733380 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-03-26 06:13:12.733391 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-03-26 06:13:12.733401 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.733412 | orchestrator | 2026-03-26 06:13:12.733424 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-26 06:13:12.733437 | orchestrator | Thursday 26 March 2026 06:12:40 +0000 (0:00:01.423) 1:10:03.669 ******** 2026-03-26 06:13:12.733450 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.733462 | orchestrator | 2026-03-26 06:13:12.733474 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-26 06:13:12.733516 | orchestrator | Thursday 26 March 2026 06:12:41 +0000 
(0:00:01.177) 1:10:04.846 ******** 2026-03-26 06:13:12.733531 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-26 06:13:12.733543 | orchestrator | 2026-03-26 06:13:12.733555 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-26 06:13:12.733567 | orchestrator | Thursday 26 March 2026 06:12:42 +0000 (0:00:01.339) 1:10:06.186 ******** 2026-03-26 06:13:12.733579 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:13:12.733592 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:13:12.733605 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:13:12.733617 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-03-26 06:13:12.733629 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 06:13:12.733641 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-26 06:13:12.733653 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 06:13:12.733665 | orchestrator | 2026-03-26 06:13:12.733677 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-26 06:13:12.733689 | orchestrator | Thursday 26 March 2026 06:12:44 +0000 (0:00:02.218) 1:10:08.405 ******** 2026-03-26 06:13:12.733701 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-26 06:13:12.733713 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-26 06:13:12.733726 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-26 06:13:12.733738 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-03-26 06:13:12.733751 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-26 06:13:12.733763 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-03-26 06:13:12.733776 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-26 06:13:12.733786 | orchestrator | 2026-03-26 06:13:12.733797 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-03-26 06:13:12.733808 | orchestrator | Thursday 26 March 2026 06:12:47 +0000 (0:00:02.296) 1:10:10.701 ******** 2026-03-26 06:13:12.733818 | orchestrator | changed: [testbed-node-5] 2026-03-26 06:13:12.733829 | orchestrator | 2026-03-26 06:13:12.733857 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-03-26 06:13:12.733869 | orchestrator | Thursday 26 March 2026 06:12:49 +0000 (0:00:02.048) 1:10:12.750 ******** 2026-03-26 06:13:12.733880 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 06:13:12.733901 | orchestrator | 2026-03-26 06:13:12.733913 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-03-26 06:13:12.733923 | orchestrator | Thursday 26 March 2026 06:12:51 +0000 (0:00:02.507) 1:10:15.258 ******** 2026-03-26 06:13:12.733934 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-26 06:13:12.733945 | orchestrator | 2026-03-26 06:13:12.733956 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-26 06:13:12.733966 | orchestrator | Thursday 26 March 2026 06:12:53 +0000 (0:00:01.940) 1:10:17.198 ******** 2026-03-26 06:13:12.733977 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-03-26 06:13:12.733988 | orchestrator | 2026-03-26 06:13:12.733999 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-26 06:13:12.734009 | orchestrator | Thursday 26 March 2026 06:12:54 +0000 (0:00:01.092) 1:10:18.291 ******** 2026-03-26 06:13:12.734076 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-03-26 06:13:12.734088 | orchestrator | 2026-03-26 06:13:12.734099 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-26 06:13:12.734110 | orchestrator | Thursday 26 March 2026 06:12:55 +0000 (0:00:01.144) 1:10:19.435 ******** 2026-03-26 06:13:12.734121 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.734132 | orchestrator | 2026-03-26 06:13:12.734143 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-26 06:13:12.734154 | orchestrator | Thursday 26 March 2026 06:12:56 +0000 (0:00:01.122) 1:10:20.558 ******** 2026-03-26 06:13:12.734164 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734175 | orchestrator | 2026-03-26 06:13:12.734186 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-03-26 06:13:12.734197 | orchestrator | Thursday 26 March 2026 06:12:58 +0000 (0:00:01.559) 1:10:22.118 ******** 2026-03-26 06:13:12.734207 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734218 | orchestrator | 2026-03-26 06:13:12.734229 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-26 06:13:12.734240 | orchestrator | Thursday 26 March 2026 06:12:59 +0000 (0:00:01.535) 1:10:23.653 ******** 2026-03-26 06:13:12.734250 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734261 | orchestrator | 2026-03-26 06:13:12.734272 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-26 06:13:12.734283 | orchestrator | Thursday 26 March 2026 06:13:01 +0000 (0:00:01.510) 1:10:25.164 ******** 2026-03-26 06:13:12.734294 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.734305 | orchestrator | 2026-03-26 06:13:12.734315 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-26 06:13:12.734326 | orchestrator | Thursday 26 March 2026 06:13:02 +0000 (0:00:01.141) 1:10:26.306 ******** 2026-03-26 06:13:12.734337 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.734347 | orchestrator | 2026-03-26 06:13:12.734358 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-26 06:13:12.734369 | orchestrator | Thursday 26 March 2026 06:13:03 +0000 (0:00:01.111) 1:10:27.417 ******** 2026-03-26 06:13:12.734380 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.734391 | orchestrator | 2026-03-26 06:13:12.734401 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-26 06:13:12.734413 | orchestrator | Thursday 26 March 2026 06:13:04 +0000 (0:00:01.175) 1:10:28.592 ******** 2026-03-26 06:13:12.734423 | 
orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734434 | orchestrator | 2026-03-26 06:13:12.734445 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 06:13:12.734455 | orchestrator | Thursday 26 March 2026 06:13:06 +0000 (0:00:01.522) 1:10:30.114 ******** 2026-03-26 06:13:12.734466 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734532 | orchestrator | 2026-03-26 06:13:12.734555 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 06:13:12.734574 | orchestrator | Thursday 26 March 2026 06:13:07 +0000 (0:00:01.497) 1:10:31.612 ******** 2026-03-26 06:13:12.734585 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.734596 | orchestrator | 2026-03-26 06:13:12.734607 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 06:13:12.734617 | orchestrator | Thursday 26 March 2026 06:13:08 +0000 (0:00:00.751) 1:10:32.363 ******** 2026-03-26 06:13:12.734628 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.734638 | orchestrator | 2026-03-26 06:13:12.734649 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 06:13:12.734660 | orchestrator | Thursday 26 March 2026 06:13:09 +0000 (0:00:00.756) 1:10:33.120 ******** 2026-03-26 06:13:12.734670 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734681 | orchestrator | 2026-03-26 06:13:12.734692 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 06:13:12.734702 | orchestrator | Thursday 26 March 2026 06:13:10 +0000 (0:00:00.820) 1:10:33.941 ******** 2026-03-26 06:13:12.734713 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734724 | orchestrator | 2026-03-26 06:13:12.734734 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 06:13:12.734745 
| orchestrator | Thursday 26 March 2026 06:13:11 +0000 (0:00:00.831) 1:10:34.772 ******** 2026-03-26 06:13:12.734756 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:12.734766 | orchestrator | 2026-03-26 06:13:12.734777 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 06:13:12.734788 | orchestrator | Thursday 26 March 2026 06:13:11 +0000 (0:00:00.808) 1:10:35.581 ******** 2026-03-26 06:13:12.734799 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:12.734809 | orchestrator | 2026-03-26 06:13:12.734829 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 06:13:53.451196 | orchestrator | Thursday 26 March 2026 06:13:12 +0000 (0:00:00.800) 1:10:36.382 ******** 2026-03-26 06:13:53.451316 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451333 | orchestrator | 2026-03-26 06:13:53.451345 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 06:13:53.451356 | orchestrator | Thursday 26 March 2026 06:13:13 +0000 (0:00:00.804) 1:10:37.186 ******** 2026-03-26 06:13:53.451368 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451379 | orchestrator | 2026-03-26 06:13:53.451390 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 06:13:53.451401 | orchestrator | Thursday 26 March 2026 06:13:14 +0000 (0:00:00.752) 1:10:37.939 ******** 2026-03-26 06:13:53.451412 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:53.451423 | orchestrator | 2026-03-26 06:13:53.451434 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 06:13:53.451445 | orchestrator | Thursday 26 March 2026 06:13:15 +0000 (0:00:00.764) 1:10:38.704 ******** 2026-03-26 06:13:53.451456 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:53.451466 | orchestrator | 2026-03-26 06:13:53.451524 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-03-26 06:13:53.451536 | orchestrator | Thursday 26 March 2026 06:13:16 +0000 (0:00:00.975) 1:10:39.679 ******** 2026-03-26 06:13:53.451547 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451558 | orchestrator | 2026-03-26 06:13:53.451568 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-03-26 06:13:53.451579 | orchestrator | Thursday 26 March 2026 06:13:16 +0000 (0:00:00.792) 1:10:40.471 ******** 2026-03-26 06:13:53.451590 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451600 | orchestrator | 2026-03-26 06:13:53.451611 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-03-26 06:13:53.451621 | orchestrator | Thursday 26 March 2026 06:13:17 +0000 (0:00:00.804) 1:10:41.275 ******** 2026-03-26 06:13:53.451632 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451667 | orchestrator | 2026-03-26 06:13:53.451678 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-03-26 06:13:53.451689 | orchestrator | Thursday 26 March 2026 06:13:18 +0000 (0:00:00.793) 1:10:42.069 ******** 2026-03-26 06:13:53.451700 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451710 | orchestrator | 2026-03-26 06:13:53.451721 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-03-26 06:13:53.451731 | orchestrator | Thursday 26 March 2026 06:13:19 +0000 (0:00:00.810) 1:10:42.880 ******** 2026-03-26 06:13:53.451743 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451755 | orchestrator | 2026-03-26 06:13:53.451767 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-03-26 06:13:53.451779 | orchestrator | Thursday 26 March 2026 06:13:20 +0000 (0:00:00.781) 1:10:43.661 ******** 
2026-03-26 06:13:53.451791 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451803 | orchestrator | 2026-03-26 06:13:53.451815 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-03-26 06:13:53.451827 | orchestrator | Thursday 26 March 2026 06:13:20 +0000 (0:00:00.811) 1:10:44.473 ******** 2026-03-26 06:13:53.451839 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451851 | orchestrator | 2026-03-26 06:13:53.451864 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-03-26 06:13:53.451878 | orchestrator | Thursday 26 March 2026 06:13:21 +0000 (0:00:00.763) 1:10:45.236 ******** 2026-03-26 06:13:53.451890 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451902 | orchestrator | 2026-03-26 06:13:53.451914 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-03-26 06:13:53.451926 | orchestrator | Thursday 26 March 2026 06:13:22 +0000 (0:00:00.778) 1:10:46.015 ******** 2026-03-26 06:13:53.451938 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451950 | orchestrator | 2026-03-26 06:13:53.451962 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-03-26 06:13:53.451973 | orchestrator | Thursday 26 March 2026 06:13:23 +0000 (0:00:00.780) 1:10:46.796 ******** 2026-03-26 06:13:53.451985 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.451998 | orchestrator | 2026-03-26 06:13:53.452011 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-03-26 06:13:53.452023 | orchestrator | Thursday 26 March 2026 06:13:23 +0000 (0:00:00.780) 1:10:47.576 ******** 2026-03-26 06:13:53.452035 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452048 | orchestrator | 2026-03-26 06:13:53.452059 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-03-26 06:13:53.452071 | orchestrator | Thursday 26 March 2026 06:13:24 +0000 (0:00:00.779) 1:10:48.355 ******** 2026-03-26 06:13:53.452083 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452095 | orchestrator | 2026-03-26 06:13:53.452107 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-03-26 06:13:53.452117 | orchestrator | Thursday 26 March 2026 06:13:25 +0000 (0:00:00.797) 1:10:49.153 ******** 2026-03-26 06:13:53.452128 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:53.452138 | orchestrator | 2026-03-26 06:13:53.452149 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-03-26 06:13:53.452159 | orchestrator | Thursday 26 March 2026 06:13:27 +0000 (0:00:01.669) 1:10:50.822 ******** 2026-03-26 06:13:53.452170 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:53.452180 | orchestrator | 2026-03-26 06:13:53.452191 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-03-26 06:13:53.452202 | orchestrator | Thursday 26 March 2026 06:13:29 +0000 (0:00:01.902) 1:10:52.724 ******** 2026-03-26 06:13:53.452212 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-03-26 06:13:53.452224 | orchestrator | 2026-03-26 06:13:53.452234 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-03-26 06:13:53.452245 | orchestrator | Thursday 26 March 2026 06:13:30 +0000 (0:00:01.127) 1:10:53.852 ******** 2026-03-26 06:13:53.452263 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452274 | orchestrator | 2026-03-26 06:13:53.452284 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-03-26 06:13:53.452312 | orchestrator | Thursday 26 March 2026 06:13:31 +0000 (0:00:01.227) 1:10:55.079 ******** 
2026-03-26 06:13:53.452324 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452335 | orchestrator | 2026-03-26 06:13:53.452346 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-03-26 06:13:53.452356 | orchestrator | Thursday 26 March 2026 06:13:32 +0000 (0:00:01.140) 1:10:56.220 ******** 2026-03-26 06:13:53.452367 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-03-26 06:13:53.452378 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-03-26 06:13:53.452388 | orchestrator | 2026-03-26 06:13:53.452399 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-03-26 06:13:53.452409 | orchestrator | Thursday 26 March 2026 06:13:34 +0000 (0:00:01.774) 1:10:57.995 ******** 2026-03-26 06:13:53.452420 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:53.452430 | orchestrator | 2026-03-26 06:13:53.452441 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-03-26 06:13:53.452451 | orchestrator | Thursday 26 March 2026 06:13:35 +0000 (0:00:01.489) 1:10:59.484 ******** 2026-03-26 06:13:53.452462 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452472 | orchestrator | 2026-03-26 06:13:53.452503 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-03-26 06:13:53.452514 | orchestrator | Thursday 26 March 2026 06:13:36 +0000 (0:00:01.121) 1:11:00.605 ******** 2026-03-26 06:13:53.452525 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452535 | orchestrator | 2026-03-26 06:13:53.452546 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-26 06:13:53.452556 | orchestrator | Thursday 26 March 2026 06:13:37 +0000 (0:00:00.823) 1:11:01.429 ******** 2026-03-26 06:13:53.452567 | orchestrator | 
skipping: [testbed-node-5] 2026-03-26 06:13:53.452577 | orchestrator | 2026-03-26 06:13:53.452588 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-26 06:13:53.452598 | orchestrator | Thursday 26 March 2026 06:13:38 +0000 (0:00:00.769) 1:11:02.198 ******** 2026-03-26 06:13:53.452609 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-03-26 06:13:53.452619 | orchestrator | 2026-03-26 06:13:53.452630 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-26 06:13:53.452640 | orchestrator | Thursday 26 March 2026 06:13:39 +0000 (0:00:01.153) 1:11:03.352 ******** 2026-03-26 06:13:53.452650 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:53.452661 | orchestrator | 2026-03-26 06:13:53.452672 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-26 06:13:53.452682 | orchestrator | Thursday 26 March 2026 06:13:41 +0000 (0:00:01.901) 1:11:05.254 ******** 2026-03-26 06:13:53.452692 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-26 06:13:53.452703 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-26 06:13:53.452713 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-26 06:13:53.452724 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452734 | orchestrator | 2026-03-26 06:13:53.452745 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-26 06:13:53.452755 | orchestrator | Thursday 26 March 2026 06:13:42 +0000 (0:00:01.154) 1:11:06.409 ******** 2026-03-26 06:13:53.452766 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452776 | orchestrator | 2026-03-26 06:13:53.452786 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-03-26 06:13:53.452797 | orchestrator | Thursday 26 March 2026 06:13:43 +0000 (0:00:01.154) 1:11:07.563 ******** 2026-03-26 06:13:53.452815 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452826 | orchestrator | 2026-03-26 06:13:53.452836 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-26 06:13:53.452847 | orchestrator | Thursday 26 March 2026 06:13:45 +0000 (0:00:01.164) 1:11:08.727 ******** 2026-03-26 06:13:53.452857 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452867 | orchestrator | 2026-03-26 06:13:53.452878 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-26 06:13:53.452888 | orchestrator | Thursday 26 March 2026 06:13:46 +0000 (0:00:01.225) 1:11:09.953 ******** 2026-03-26 06:13:53.452899 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452910 | orchestrator | 2026-03-26 06:13:53.452920 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-26 06:13:53.452930 | orchestrator | Thursday 26 March 2026 06:13:47 +0000 (0:00:01.149) 1:11:11.103 ******** 2026-03-26 06:13:53.452941 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:13:53.452951 | orchestrator | 2026-03-26 06:13:53.452962 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-26 06:13:53.452972 | orchestrator | Thursday 26 March 2026 06:13:48 +0000 (0:00:00.786) 1:11:11.889 ******** 2026-03-26 06:13:53.452983 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:13:53.452993 | orchestrator | 2026-03-26 06:13:53.453004 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-26 06:13:53.453015 | orchestrator | Thursday 26 March 2026 06:13:50 +0000 (0:00:02.128) 1:11:14.018 ******** 2026-03-26 06:13:53.453025 | orchestrator | ok: 
[testbed-node-5]
2026-03-26 06:13:53.453035 | orchestrator |
2026-03-26 06:13:53.453046 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-26 06:13:53.453056 | orchestrator | Thursday 26 March 2026 06:13:51 +0000 (0:00:00.789) 1:11:14.807 ********
2026-03-26 06:13:53.453067 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-03-26 06:13:53.453077 | orchestrator |
2026-03-26 06:13:53.453088 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-26 06:13:53.453098 | orchestrator | Thursday 26 March 2026 06:13:52 +0000 (0:00:01.153) 1:11:15.960 ********
2026-03-26 06:13:53.453109 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:13:53.453119 | orchestrator |
2026-03-26 06:13:53.453130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-26 06:13:53.453147 | orchestrator | Thursday 26 March 2026 06:13:53 +0000 (0:00:01.139) 1:11:17.100 ********
2026-03-26 06:14:35.309828 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.309952 | orchestrator |
2026-03-26 06:14:35.309971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-26 06:14:35.309985 | orchestrator | Thursday 26 March 2026 06:13:54 +0000 (0:00:01.133) 1:11:18.233 ********
2026-03-26 06:14:35.309996 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310007 | orchestrator |
2026-03-26 06:14:35.310081 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-26 06:14:35.310095 | orchestrator | Thursday 26 March 2026 06:13:55 +0000 (0:00:01.304) 1:11:19.538 ********
2026-03-26 06:14:35.310108 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310119 | orchestrator |
2026-03-26 06:14:35.310130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-26 06:14:35.310140 | orchestrator | Thursday 26 March 2026 06:13:57 +0000 (0:00:01.134) 1:11:20.673 ********
2026-03-26 06:14:35.310151 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310162 | orchestrator |
2026-03-26 06:14:35.310173 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-26 06:14:35.310184 | orchestrator | Thursday 26 March 2026 06:13:58 +0000 (0:00:01.132) 1:11:21.805 ********
2026-03-26 06:14:35.310194 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310205 | orchestrator |
2026-03-26 06:14:35.310216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-26 06:14:35.310227 | orchestrator | Thursday 26 March 2026 06:13:59 +0000 (0:00:01.218) 1:11:23.023 ********
2026-03-26 06:14:35.310263 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310275 | orchestrator |
2026-03-26 06:14:35.310286 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-26 06:14:35.310297 | orchestrator | Thursday 26 March 2026 06:14:00 +0000 (0:00:01.210) 1:11:24.234 ********
2026-03-26 06:14:35.310307 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310318 | orchestrator |
2026-03-26 06:14:35.310329 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-26 06:14:35.310340 | orchestrator | Thursday 26 March 2026 06:14:01 +0000 (0:00:01.177) 1:11:25.412 ********
2026-03-26 06:14:35.310353 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:14:35.310367 | orchestrator |
2026-03-26 06:14:35.310379 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-26 06:14:35.310391 | orchestrator | Thursday 26 March 2026 06:14:02 +0000 (0:00:00.832) 1:11:26.244 ********
2026-03-26 06:14:35.310403 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-03-26 06:14:35.310416 | orchestrator |
2026-03-26 06:14:35.310428 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-26 06:14:35.310441 | orchestrator | Thursday 26 March 2026 06:14:03 +0000 (0:00:01.172) 1:11:27.416 ********
2026-03-26 06:14:35.310453 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-03-26 06:14:35.310465 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-26 06:14:35.310504 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-26 06:14:35.310517 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-26 06:14:35.310529 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-26 06:14:35.310541 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-26 06:14:35.310553 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-26 06:14:35.310566 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-26 06:14:35.310578 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-26 06:14:35.310590 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-26 06:14:35.310603 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-26 06:14:35.310615 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-26 06:14:35.310627 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-26 06:14:35.310639 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-26 06:14:35.310652 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-03-26 06:14:35.310665 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-03-26 06:14:35.310677 | orchestrator |
2026-03-26 06:14:35.310689 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-26 06:14:35.310701 | orchestrator | Thursday 26 March 2026 06:14:09 +0000 (0:00:06.082) 1:11:33.499 ********
2026-03-26 06:14:35.310711 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5
2026-03-26 06:14:35.310722 | orchestrator |
2026-03-26 06:14:35.310733 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-03-26 06:14:35.310743 | orchestrator | Thursday 26 March 2026 06:14:10 +0000 (0:00:01.095) 1:11:34.594 ********
2026-03-26 06:14:35.310754 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 06:14:35.310766 | orchestrator |
2026-03-26 06:14:35.310777 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-26 06:14:35.310788 | orchestrator | Thursday 26 March 2026 06:14:12 +0000 (0:00:01.585) 1:11:36.229 ********
2026-03-26 06:14:35.310798 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 06:14:35.310818 | orchestrator |
2026-03-26 06:14:35.310829 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-26 06:14:35.310840 | orchestrator | Thursday 26 March 2026 06:14:14 +0000 (0:00:01.634) 1:11:37.815 ********
2026-03-26 06:14:35.310851 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310861 | orchestrator |
2026-03-26 06:14:35.310872 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-26 06:14:35.310901 | orchestrator | Thursday 26 March 2026 06:14:14 +0000 (0:00:00.830) 1:11:38.645 ********
2026-03-26 06:14:35.310912 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310923 | orchestrator |
2026-03-26 06:14:35.310934 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-26 06:14:35.310944 | orchestrator | Thursday 26 March 2026 06:14:15 +0000 (0:00:00.800) 1:11:39.446 ********
2026-03-26 06:14:35.310955 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.310966 | orchestrator |
2026-03-26 06:14:35.310976 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-26 06:14:35.310987 | orchestrator | Thursday 26 March 2026 06:14:16 +0000 (0:00:00.870) 1:11:40.317 ********
2026-03-26 06:14:35.310998 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311008 | orchestrator |
2026-03-26 06:14:35.311019 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-26 06:14:35.311030 | orchestrator | Thursday 26 March 2026 06:14:17 +0000 (0:00:00.774) 1:11:41.091 ********
2026-03-26 06:14:35.311041 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311051 | orchestrator |
2026-03-26 06:14:35.311062 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-26 06:14:35.311073 | orchestrator | Thursday 26 March 2026 06:14:18 +0000 (0:00:00.809) 1:11:41.901 ********
2026-03-26 06:14:35.311083 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311094 | orchestrator |
2026-03-26 06:14:35.311105 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-26 06:14:35.311116 | orchestrator | Thursday 26 March 2026 06:14:19 +0000 (0:00:00.775) 1:11:42.677 ********
2026-03-26 06:14:35.311126 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311137 | orchestrator |
2026-03-26 06:14:35.311148 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-26 06:14:35.311159 | orchestrator | Thursday 26 March 2026 06:14:19 +0000 (0:00:00.779) 1:11:43.457 ********
2026-03-26 06:14:35.311170 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311181 | orchestrator |
2026-03-26 06:14:35.311192 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-26 06:14:35.311202 | orchestrator | Thursday 26 March 2026 06:14:20 +0000 (0:00:00.818) 1:11:44.275 ********
2026-03-26 06:14:35.311213 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311224 | orchestrator |
2026-03-26 06:14:35.311234 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-26 06:14:35.311245 | orchestrator | Thursday 26 March 2026 06:14:21 +0000 (0:00:00.830) 1:11:45.105 ********
2026-03-26 06:14:35.311256 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311266 | orchestrator |
2026-03-26 06:14:35.311277 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-26 06:14:35.311288 | orchestrator | Thursday 26 March 2026 06:14:22 +0000 (0:00:00.794) 1:11:45.900 ********
2026-03-26 06:14:35.311298 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311309 | orchestrator |
2026-03-26 06:14:35.311320 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-26 06:14:35.311331 | orchestrator | Thursday 26 March 2026 06:14:23 +0000 (0:00:00.788) 1:11:46.689 ********
2026-03-26 06:14:35.311341 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-03-26 06:14:35.311352 | orchestrator |
2026-03-26 06:14:35.311363 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-26 06:14:35.311374 | orchestrator | Thursday 26 March 2026 06:14:27 +0000 (0:00:04.177) 1:11:50.866 ********
2026-03-26 06:14:35.311416 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 06:14:35.311427 | orchestrator |
2026-03-26 06:14:35.311438 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-26 06:14:35.311449 | orchestrator | Thursday 26 March 2026 06:14:28 +0000 (0:00:00.932) 1:11:51.798 ********
2026-03-26 06:14:35.311462 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-26 06:14:35.311532 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-26 06:14:35.311547 | orchestrator |
2026-03-26 06:14:35.311558 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-26 06:14:35.311569 | orchestrator | Thursday 26 March 2026 06:14:32 +0000 (0:00:04.731) 1:11:56.530 ********
2026-03-26 06:14:35.311579 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311590 | orchestrator |
2026-03-26 06:14:35.311600 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-26 06:14:35.311611 | orchestrator | Thursday 26 March 2026 06:14:33 +0000 (0:00:00.778) 1:11:57.308 ********
2026-03-26 06:14:35.311621 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311632 | orchestrator |
2026-03-26 06:14:35.311643 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-26 06:14:35.311654 | orchestrator | Thursday 26 March 2026 06:14:34 +0000 (0:00:00.838) 1:11:58.147 ********
2026-03-26 06:14:35.311664 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:14:35.311675 | orchestrator |
2026-03-26 06:14:35.311686 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-26 06:14:35.311705 | orchestrator | Thursday 26 March 2026 06:14:35 +0000 (0:00:00.809) 1:11:58.957 ********
2026-03-26 06:15:40.589100 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.589250 | orchestrator |
2026-03-26 06:15:40.589279 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-26 06:15:40.589300 | orchestrator | Thursday 26 March 2026 06:14:36 +0000 (0:00:00.825) 1:11:59.783 ********
2026-03-26 06:15:40.589318 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.589335 | orchestrator |
2026-03-26 06:15:40.589353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-26 06:15:40.589372 | orchestrator | Thursday 26 March 2026 06:14:36 +0000 (0:00:00.830) 1:12:00.613 ********
2026-03-26 06:15:40.589390 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:15:40.589410 | orchestrator |
2026-03-26 06:15:40.589429 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-26 06:15:40.589448 | orchestrator | Thursday 26 March 2026 06:14:37 +0000 (0:00:00.908) 1:12:01.521 ********
2026-03-26 06:15:40.589527 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-26 06:15:40.589549 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-26 06:15:40.589567 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-26 06:15:40.589587 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.589607 | orchestrator |
2026-03-26 06:15:40.589626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-26 06:15:40.589646 | orchestrator | Thursday 26 March 2026 06:14:38 +0000 (0:00:01.113) 1:12:02.635 ********
2026-03-26 06:15:40.589666 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-26 06:15:40.589725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-26 06:15:40.589746 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-26 06:15:40.589765 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.589780 | orchestrator |
2026-03-26 06:15:40.589792 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-26 06:15:40.589806 | orchestrator | Thursday 26 March 2026 06:14:40 +0000 (0:00:01.068) 1:12:03.703 ********
2026-03-26 06:15:40.589819 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-26 06:15:40.589831 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-26 06:15:40.589844 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-26 06:15:40.589856 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.589868 | orchestrator |
2026-03-26 06:15:40.589881 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-26 06:15:40.589894 | orchestrator | Thursday 26 March 2026 06:14:41 +0000 (0:00:01.058) 1:12:04.762 ********
2026-03-26 06:15:40.589906 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:15:40.589924 | orchestrator |
2026-03-26 06:15:40.589943 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-26 06:15:40.589961 | orchestrator | Thursday 26 March 2026 06:14:41 +0000 (0:00:00.822) 1:12:05.584 ********
2026-03-26 06:15:40.589980 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-26 06:15:40.589997 | orchestrator |
2026-03-26 06:15:40.590015 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-26 06:15:40.590124 | orchestrator | Thursday 26 March 2026 06:14:43 +0000 (0:00:01.579) 1:12:07.164 ********
2026-03-26 06:15:40.590158 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:15:40.590179 | orchestrator |
2026-03-26 06:15:40.590199 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-26 06:15:40.590220 | orchestrator | Thursday 26 March 2026 06:14:44 +0000 (0:00:01.420) 1:12:08.584 ********
2026-03-26 06:15:40.590240 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-03-26 06:15:40.590259 | orchestrator |
2026-03-26 06:15:40.590280 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-26 06:15:40.590299 | orchestrator | Thursday 26 March 2026 06:14:46 +0000 (0:00:01.146) 1:12:09.731 ********
2026-03-26 06:15:40.590319 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 06:15:40.590339 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-26 06:15:40.590357 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-26 06:15:40.590375 | orchestrator |
2026-03-26 06:15:40.590394 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-26 06:15:40.590413 | orchestrator | Thursday 26 March 2026 06:14:49 +0000 (0:00:03.253) 1:12:12.984 ********
2026-03-26 06:15:40.590432 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-03-26 06:15:40.590450 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-26 06:15:40.590501 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:15:40.590521 | orchestrator |
2026-03-26 06:15:40.590540 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-26 06:15:40.590558 | orchestrator | Thursday 26 March 2026 06:14:51 +0000 (0:00:01.923) 1:12:14.907 ********
2026-03-26 06:15:40.590576 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.590594 | orchestrator |
2026-03-26 06:15:40.590612 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-26 06:15:40.590629 | orchestrator | Thursday 26 March 2026 06:14:52 +0000 (0:00:00.832) 1:12:15.740 ********
2026-03-26 06:15:40.590649 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5
2026-03-26 06:15:40.590668 | orchestrator |
2026-03-26 06:15:40.590687 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-26 06:15:40.590706 | orchestrator | Thursday 26 March 2026 06:14:53 +0000 (0:00:01.128) 1:12:16.868 ********
2026-03-26 06:15:40.590741 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 06:15:40.590760 | orchestrator |
2026-03-26 06:15:40.590779 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-26 06:15:40.590798 | orchestrator | Thursday 26 March 2026 06:14:54 +0000 (0:00:01.593) 1:12:18.462 ********
2026-03-26 06:15:40.590844 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 06:15:40.590866 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-26 06:15:40.590885 | orchestrator |
2026-03-26 06:15:40.590904 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-26 06:15:40.590923 | orchestrator | Thursday 26 March 2026 06:15:00 +0000 (0:00:05.203) 1:12:23.665 ********
2026-03-26 06:15:40.590942 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-26 06:15:40.590960 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-26 06:15:40.590978 | orchestrator |
2026-03-26 06:15:40.590998 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-26 06:15:40.591017 | orchestrator | Thursday 26 March 2026 06:15:03 +0000 (0:00:03.118) 1:12:26.784 ********
2026-03-26 06:15:40.591034 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-03-26 06:15:40.591052 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:15:40.591070 | orchestrator |
2026-03-26 06:15:40.591088 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-26 06:15:40.591107 | orchestrator | Thursday 26 March 2026 06:15:04 +0000 (0:00:01.764) 1:12:28.548 ********
2026-03-26 06:15:40.591124 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5
2026-03-26 06:15:40.591142 | orchestrator |
2026-03-26 06:15:40.591160 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-26 06:15:40.591179 | orchestrator | Thursday 26 March 2026 06:15:06 +0000 (0:00:01.134) 1:12:29.682 ********
2026-03-26 06:15:40.591197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591270 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591289 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.591308 | orchestrator |
2026-03-26 06:15:40.591326 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-26 06:15:40.591344 | orchestrator | Thursday 26 March 2026 06:15:07 +0000 (0:00:01.604) 1:12:31.287 ********
2026-03-26 06:15:40.591362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591396 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591413 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591490 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.591512 | orchestrator |
2026-03-26 06:15:40.591531 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-03-26 06:15:40.591552 | orchestrator | Thursday 26 March 2026 06:15:09 +0000 (0:00:01.619) 1:12:32.906 ********
2026-03-26 06:15:40.591571 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591590 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591609 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591630 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591651 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-26 06:15:40.591671 | orchestrator |
2026-03-26 06:15:40.591691 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-03-26 06:15:40.591710 | orchestrator | Thursday 26 March 2026 06:15:39 +0000 (0:00:30.561) 1:13:03.468 ********
2026-03-26 06:15:40.591729 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:15:40.591746 | orchestrator |
2026-03-26 06:15:40.591763 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-03-26 06:15:40.591793 | orchestrator | Thursday 26 March 2026 06:15:40 +0000 (0:00:00.768) 1:13:04.237 ********
2026-03-26 06:16:33.008305 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:16:33.008417 | orchestrator |
2026-03-26 06:16:33.008435 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-03-26 06:16:33.008448 | orchestrator | Thursday 26 March 2026 06:15:41 +0000 (0:00:00.775) 1:13:05.012 ********
2026-03-26 06:16:33.008483 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5
2026-03-26 06:16:33.008495 | orchestrator |
2026-03-26 06:16:33.008506 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-03-26 06:16:33.008517 | orchestrator | Thursday 26 March 2026 06:15:42 +0000 (0:00:01.115) 1:13:06.128 ********
2026-03-26 06:16:33.008528 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5
2026-03-26 06:16:33.008539 | orchestrator |
2026-03-26 06:16:33.008550 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-03-26 06:16:33.008561 | orchestrator | Thursday 26 March 2026 06:15:43 +0000 (0:00:02.032) 1:13:07.237 ********
2026-03-26 06:16:33.008572 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.008584 | orchestrator |
2026-03-26 06:16:33.008595 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-03-26 06:16:33.008605 | orchestrator | Thursday 26 March 2026 06:15:45 +0000 (0:00:02.034) 1:13:09.270 ********
2026-03-26 06:16:33.008616 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.008627 | orchestrator |
2026-03-26 06:16:33.008638 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-26 06:16:33.008649 | orchestrator | Thursday 26 March 2026 06:15:47 +0000 (0:00:02.238) 1:13:11.304 ********
2026-03-26 06:16:33.008660 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.008671 | orchestrator |
2026-03-26 06:16:33.008682 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-26 06:16:33.008693 | orchestrator | Thursday 26 March 2026 06:15:49 +0000 (0:00:02.238) 1:13:13.543 ********
2026-03-26 06:16:33.008704 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-26 06:16:33.008737 | orchestrator |
2026-03-26 06:16:33.008750 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ********************************************
2026-03-26 06:16:33.008760 | orchestrator | skipping: no hosts matched
2026-03-26 06:16:33.008771 | orchestrator |
2026-03-26 06:16:33.008782 | orchestrator | PLAY [Upgrade ceph nfs node] ***************************************************
2026-03-26 06:16:33.008793 | orchestrator | skipping: no hosts matched
2026-03-26 06:16:33.008803 | orchestrator |
2026-03-26 06:16:33.008814 | orchestrator | PLAY [Upgrade ceph client node] ************************************************
2026-03-26 06:16:33.008825 | orchestrator | skipping: no hosts matched
2026-03-26 06:16:33.008835 | orchestrator |
2026-03-26 06:16:33.008847 | orchestrator | PLAY [Upgrade ceph-crash daemons] **********************************************
2026-03-26 06:16:33.008859 | orchestrator |
2026-03-26 06:16:33.008871 | orchestrator | TASK [Stop the ceph-crash service] *********************************************
2026-03-26 06:16:33.008884 | orchestrator | Thursday 26 March 2026 06:15:54 +0000 (0:00:04.262) 1:13:17.806 ********
2026-03-26 06:16:33.008896 | orchestrator | changed: [testbed-node-0]
2026-03-26 06:16:33.008908 | orchestrator | changed: [testbed-node-1]
2026-03-26 06:16:33.008920 | orchestrator | changed: [testbed-node-2]
2026-03-26 06:16:33.008933 | orchestrator | changed: [testbed-node-3]
2026-03-26 06:16:33.008945 | orchestrator | changed: [testbed-node-4]
2026-03-26 06:16:33.008957 | orchestrator | changed: [testbed-node-5]
2026-03-26 06:16:33.008970 | orchestrator |
2026-03-26 06:16:33.008982 | orchestrator | TASK [Mask and disable the ceph-crash service] *********************************
2026-03-26 06:16:33.008995 | orchestrator | Thursday 26 March 2026 06:15:56 +0000 (0:00:02.563) 1:13:20.369 ********
2026-03-26 06:16:33.009007 | orchestrator | changed: [testbed-node-0]
2026-03-26 06:16:33.009019 | orchestrator | changed: [testbed-node-1]
2026-03-26 06:16:33.009032 | orchestrator | changed: [testbed-node-3]
2026-03-26 06:16:33.009044 | orchestrator | changed: [testbed-node-2]
2026-03-26 06:16:33.009056 | orchestrator | changed: [testbed-node-4]
2026-03-26 06:16:33.009069 | orchestrator | changed: [testbed-node-5]
2026-03-26 06:16:33.009082 | orchestrator |
2026-03-26 06:16:33.009093 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-26 06:16:33.009104 | orchestrator | Thursday 26 March 2026 06:16:00 +0000 (0:00:03.343) 1:13:23.713 ********
2026-03-26 06:16:33.009115 | orchestrator | ok: [testbed-node-0]
2026-03-26 06:16:33.009126 | orchestrator | ok: [testbed-node-1]
2026-03-26 06:16:33.009137 | orchestrator | ok: [testbed-node-2]
2026-03-26 06:16:33.009147 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:16:33.009158 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:16:33.009169 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.009180 | orchestrator |
2026-03-26 06:16:33.009191 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-26 06:16:33.009201 | orchestrator | Thursday 26 March 2026 06:16:02 +0000 (0:00:02.461) 1:13:26.174 ********
2026-03-26 06:16:33.009212 | orchestrator | ok: [testbed-node-0]
2026-03-26 06:16:33.009223 | orchestrator | ok: [testbed-node-1]
2026-03-26 06:16:33.009233 | orchestrator | ok: [testbed-node-2]
2026-03-26 06:16:33.009244 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:16:33.009255 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:16:33.009274 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.009292 | orchestrator |
2026-03-26 06:16:33.009310 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-26 06:16:33.009339 | orchestrator | Thursday 26 March 2026 06:16:04 +0000 (0:00:02.407) 1:13:28.582 ********
2026-03-26 06:16:33.009359 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 06:16:33.009378 | orchestrator |
2026-03-26 06:16:33.009396 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-26 06:16:33.009415 | orchestrator | Thursday 26 March 2026 06:16:07 +0000 (0:00:02.093) 1:13:30.676 ********
2026-03-26 06:16:33.009433 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-26 06:16:33.009506 | orchestrator |
2026-03-26 06:16:33.009537 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-26 06:16:33.009549 | orchestrator | Thursday 26 March 2026 06:16:09 +0000 (0:00:02.227) 1:13:32.903 ********
2026-03-26 06:16:33.009560 | orchestrator | ok: [testbed-node-0]
2026-03-26 06:16:33.009571 | orchestrator | ok: [testbed-node-1]
2026-03-26 06:16:33.009581 | orchestrator | ok: [testbed-node-2]
2026-03-26 06:16:33.009593 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:16:33.009603 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:16:33.009614 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:16:33.009625 | orchestrator |
2026-03-26 06:16:33.009636 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-26 06:16:33.009647 | orchestrator | Thursday 26 March 2026 06:16:11 +0000 (0:00:02.322) 1:13:35.226 ********
2026-03-26 06:16:33.009657 | orchestrator | skipping: [testbed-node-0]
2026-03-26 06:16:33.009668 | orchestrator | skipping: [testbed-node-1]
2026-03-26 06:16:33.009679 | orchestrator | skipping: [testbed-node-2]
2026-03-26 06:16:33.009689 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:16:33.009700 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:16:33.009711 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.009721 | orchestrator |
2026-03-26 06:16:33.009732 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-26 06:16:33.009743 | orchestrator | Thursday 26 March 2026 06:16:13 +0000 (0:00:02.097) 1:13:37.323 ********
2026-03-26 06:16:33.009753 | orchestrator | skipping: [testbed-node-0]
2026-03-26 06:16:33.009764 | orchestrator | skipping: [testbed-node-1]
2026-03-26 06:16:33.009775 | orchestrator | skipping: [testbed-node-2]
2026-03-26 06:16:33.009785 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:16:33.009796 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:16:33.009807 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.009818 | orchestrator |
2026-03-26 06:16:33.009837 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-26 06:16:33.009854 | orchestrator | Thursday 26 March 2026 06:16:16 +0000 (0:00:02.526) 1:13:39.850 ********
2026-03-26 06:16:33.009871 | orchestrator | skipping: [testbed-node-0]
2026-03-26 06:16:33.009890 | orchestrator | skipping: [testbed-node-1]
2026-03-26 06:16:33.009911 | orchestrator | skipping: [testbed-node-2]
2026-03-26 06:16:33.009929 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:16:33.009947 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:16:33.009966 | orchestrator | ok: [testbed-node-5]
2026-03-26 06:16:33.009983 | orchestrator |
2026-03-26 06:16:33.010002 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-26 06:16:33.010065 | orchestrator | Thursday 26 March 2026 06:16:18 +0000 (0:00:02.109) 1:13:41.960 ********
2026-03-26 06:16:33.010077 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:16:33.010099 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:16:33.010109 | orchestrator | ok: [testbed-node-0]
2026-03-26 06:16:33.010120 | orchestrator | ok: [testbed-node-1]
2026-03-26 06:16:33.010131 | orchestrator | ok: [testbed-node-2]
2026-03-26 06:16:33.010141 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:16:33.010152 | orchestrator |
2026-03-26 06:16:33.010163 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-26 06:16:33.010174 | orchestrator | Thursday 26 March 2026 06:16:20 +0000 (0:00:02.500) 1:13:44.461 ********
2026-03-26 06:16:33.010245 | orchestrator | skipping: [testbed-node-0]
2026-03-26 06:16:33.010259 | orchestrator | skipping: [testbed-node-1]
2026-03-26 06:16:33.010270 | orchestrator | skipping: [testbed-node-2]
2026-03-26 06:16:33.010281 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:16:33.010292 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:16:33.010310 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:16:33.010331 | orchestrator |
2026-03-26 06:16:33.010350 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-26 06:16:33.010383 | orchestrator | Thursday 26 March 2026 06:16:22 +0000 (0:00:01.845) 1:13:46.307 ********
2026-03-26 06:16:33.010403 | orchestrator | skipping: [testbed-node-0]
2026-03-26 06:16:33.010496 | orchestrator | skipping: [testbed-node-1]
2026-03-26 06:16:33.010511 | orchestrator | skipping: [testbed-node-2]
2026-03-26 06:16:33.010522 | orchestrator | skipping: [testbed-node-3]
2026-03-26 06:16:33.010533 | orchestrator | skipping: [testbed-node-4]
2026-03-26 06:16:33.010544 | orchestrator | skipping: [testbed-node-5]
2026-03-26 06:16:33.010555 | orchestrator |
2026-03-26 06:16:33.010566 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-26 06:16:33.010576 | orchestrator | Thursday 26 March 2026 06:16:24 +0000 (0:00:01.755) 1:13:48.062 ********
2026-03-26 06:16:33.010587 | orchestrator | ok: [testbed-node-0]
2026-03-26 06:16:33.010598 | orchestrator | ok: [testbed-node-1]
2026-03-26 06:16:33.010609 | orchestrator | ok: [testbed-node-2]
2026-03-26 06:16:33.010619 | orchestrator | ok: [testbed-node-3]
2026-03-26 06:16:33.010630 | orchestrator | ok: [testbed-node-4]
2026-03-26 06:16:33.010641 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:16:33.010651 | orchestrator | 2026-03-26 06:16:33.010662 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-26 06:16:33.010673 | orchestrator | Thursday 26 March 2026 06:16:26 +0000 (0:00:02.545) 1:13:50.608 ******** 2026-03-26 06:16:33.010684 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:16:33.010695 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:16:33.010705 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:16:33.010716 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:16:33.010726 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:16:33.010737 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:16:33.010748 | orchestrator | 2026-03-26 06:16:33.010759 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-26 06:16:33.010770 | orchestrator | Thursday 26 March 2026 06:16:28 +0000 (0:00:02.034) 1:13:52.643 ******** 2026-03-26 06:16:33.010781 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:16:33.010792 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:16:33.010802 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:16:33.010813 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:16:33.010823 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:16:33.010834 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:16:33.010845 | orchestrator | 2026-03-26 06:16:33.010856 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-26 06:16:33.010867 | orchestrator | Thursday 26 March 2026 06:16:31 +0000 (0:00:02.197) 1:13:54.841 ******** 2026-03-26 06:16:33.010878 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:16:33.010889 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:16:33.010899 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:16:33.010910 | orchestrator | skipping: 
[testbed-node-3] 2026-03-26 06:16:33.010920 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:16:33.010931 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:16:33.010949 | orchestrator | 2026-03-26 06:16:33.010981 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-26 06:17:29.181952 | orchestrator | Thursday 26 March 2026 06:16:32 +0000 (0:00:01.813) 1:13:56.654 ******** 2026-03-26 06:17:29.182104 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.182118 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.182127 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.182135 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182144 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.182152 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.182160 | orchestrator | 2026-03-26 06:17:29.182168 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-26 06:17:29.182177 | orchestrator | Thursday 26 March 2026 06:16:35 +0000 (0:00:02.039) 1:13:58.694 ******** 2026-03-26 06:17:29.182185 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.182192 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.182200 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.182228 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182237 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.182244 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.182252 | orchestrator | 2026-03-26 06:17:29.182260 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-26 06:17:29.182268 | orchestrator | Thursday 26 March 2026 06:16:36 +0000 (0:00:01.825) 1:14:00.520 ******** 2026-03-26 06:17:29.182275 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.182283 | orchestrator | skipping: [testbed-node-1] 2026-03-26 
06:17:29.182291 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.182298 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182306 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.182314 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.182322 | orchestrator | 2026-03-26 06:17:29.182329 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-26 06:17:29.182337 | orchestrator | Thursday 26 March 2026 06:16:38 +0000 (0:00:02.062) 1:14:02.582 ******** 2026-03-26 06:17:29.182345 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.182353 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.182360 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.182368 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:17:29.182375 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:17:29.182383 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:17:29.182390 | orchestrator | 2026-03-26 06:17:29.182398 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-26 06:17:29.182406 | orchestrator | Thursday 26 March 2026 06:16:40 +0000 (0:00:01.830) 1:14:04.412 ******** 2026-03-26 06:17:29.182414 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.182421 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.182429 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.182436 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:17:29.182467 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:17:29.182476 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:17:29.182484 | orchestrator | 2026-03-26 06:17:29.182491 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-26 06:17:29.182499 | orchestrator | Thursday 26 March 2026 06:16:42 +0000 (0:00:02.115) 1:14:06.528 ******** 2026-03-26 06:17:29.182507 | 
orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182514 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.182522 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.182530 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:17:29.182537 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:17:29.182545 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:17:29.182553 | orchestrator | 2026-03-26 06:17:29.182560 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-26 06:17:29.182568 | orchestrator | Thursday 26 March 2026 06:16:44 +0000 (0:00:01.837) 1:14:08.365 ******** 2026-03-26 06:17:29.182575 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182583 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.182591 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.182598 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182606 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.182613 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.182621 | orchestrator | 2026-03-26 06:17:29.182628 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-26 06:17:29.182636 | orchestrator | Thursday 26 March 2026 06:16:46 +0000 (0:00:01.850) 1:14:10.215 ******** 2026-03-26 06:17:29.182644 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182651 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.182659 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.182666 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182674 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.182681 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.182689 | orchestrator | 2026-03-26 06:17:29.182697 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-26 06:17:29.182711 | orchestrator | Thursday 26 March 2026 06:16:48 +0000 (0:00:02.318) 
1:14:12.533 ******** 2026-03-26 06:17:29.182719 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182727 | orchestrator | 2026-03-26 06:17:29.182734 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-26 06:17:29.182742 | orchestrator | Thursday 26 March 2026 06:16:51 +0000 (0:00:03.045) 1:14:15.579 ******** 2026-03-26 06:17:29.182750 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182757 | orchestrator | 2026-03-26 06:17:29.182765 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-26 06:17:29.182773 | orchestrator | Thursday 26 March 2026 06:16:55 +0000 (0:00:03.509) 1:14:19.088 ******** 2026-03-26 06:17:29.182780 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182788 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.182795 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.182803 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182810 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.182818 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.182825 | orchestrator | 2026-03-26 06:17:29.182833 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-26 06:17:29.182841 | orchestrator | Thursday 26 March 2026 06:16:57 +0000 (0:00:02.520) 1:14:21.608 ******** 2026-03-26 06:17:29.182848 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182856 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.182863 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.182871 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182879 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.182886 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.182894 | orchestrator | 2026-03-26 06:17:29.182902 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-03-26 06:17:29.182924 | orchestrator 
| Thursday 26 March 2026 06:17:00 +0000 (0:00:02.064) 1:14:23.672 ******** 2026-03-26 06:17:29.182933 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-26 06:17:29.182942 | orchestrator | 2026-03-26 06:17:29.182950 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-26 06:17:29.182958 | orchestrator | Thursday 26 March 2026 06:17:02 +0000 (0:00:02.694) 1:14:26.367 ******** 2026-03-26 06:17:29.182966 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.182973 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.182981 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.182988 | orchestrator | ok: [testbed-node-3] 2026-03-26 06:17:29.182996 | orchestrator | ok: [testbed-node-4] 2026-03-26 06:17:29.183003 | orchestrator | ok: [testbed-node-5] 2026-03-26 06:17:29.183011 | orchestrator | 2026-03-26 06:17:29.183018 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-26 06:17:29.183028 | orchestrator | Thursday 26 March 2026 06:17:05 +0000 (0:00:02.909) 1:14:29.276 ******** 2026-03-26 06:17:29.183041 | orchestrator | changed: [testbed-node-3] 2026-03-26 06:17:29.183054 | orchestrator | changed: [testbed-node-1] 2026-03-26 06:17:29.183067 | orchestrator | changed: [testbed-node-4] 2026-03-26 06:17:29.183083 | orchestrator | changed: [testbed-node-0] 2026-03-26 06:17:29.183101 | orchestrator | changed: [testbed-node-5] 2026-03-26 06:17:29.183113 | orchestrator | changed: [testbed-node-2] 2026-03-26 06:17:29.183126 | orchestrator | 2026-03-26 06:17:29.183138 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-03-26 06:17:29.183150 | orchestrator | 2026-03-26 06:17:29.183164 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-03-26 06:17:29.183177 | orchestrator | Thursday 26 March 2026 06:17:10 +0000 (0:00:04.816) 1:14:34.093 ******** 2026-03-26 06:17:29.183190 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.183203 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.183216 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.183228 | orchestrator | 2026-03-26 06:17:29.183240 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 06:17:29.183266 | orchestrator | Thursday 26 March 2026 06:17:12 +0000 (0:00:01.712) 1:14:35.806 ******** 2026-03-26 06:17:29.183282 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.183295 | orchestrator | ok: [testbed-node-1] 2026-03-26 06:17:29.183309 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:17:29.183323 | orchestrator | 2026-03-26 06:17:29.183337 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-26 06:17:29.183352 | orchestrator | Thursday 26 March 2026 06:17:13 +0000 (0:00:01.604) 1:14:37.410 ******** 2026-03-26 06:17:29.183366 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:17:29.183378 | orchestrator | 2026-03-26 06:17:29.183391 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-03-26 06:17:29.183405 | orchestrator | Thursday 26 March 2026 06:17:16 +0000 (0:00:02.350) 1:14:39.761 ******** 2026-03-26 06:17:29.183417 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.183430 | orchestrator | 2026-03-26 06:17:29.183483 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-03-26 06:17:29.183497 | orchestrator | 2026-03-26 06:17:29.183511 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-03-26 06:17:29.183525 | orchestrator | Thursday 26 March 2026 06:17:18 +0000 (0:00:01.941) 1:14:41.702 ******** 2026-03-26 
06:17:29.183538 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.183552 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.183566 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.183581 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:17:29.183596 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:17:29.183610 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:17:29.183624 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:17:29.183633 | orchestrator | 2026-03-26 06:17:29.183641 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 06:17:29.183650 | orchestrator | Thursday 26 March 2026 06:17:20 +0000 (0:00:02.558) 1:14:44.261 ******** 2026-03-26 06:17:29.183660 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.183675 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.183689 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.183705 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:17:29.183719 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:17:29.183733 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:17:29.183748 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:17:29.183762 | orchestrator | 2026-03-26 06:17:29.183777 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-03-26 06:17:29.183791 | orchestrator | Thursday 26 March 2026 06:17:23 +0000 (0:00:02.581) 1:14:46.843 ******** 2026-03-26 06:17:29.183806 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.183821 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.183835 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.183850 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:17:29.183860 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:17:29.183868 | orchestrator | skipping: [testbed-node-5] 2026-03-26 
06:17:29.183877 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:17:29.183890 | orchestrator | 2026-03-26 06:17:29.183905 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-03-26 06:17:29.183920 | orchestrator | Thursday 26 March 2026 06:17:25 +0000 (0:00:02.498) 1:14:49.341 ******** 2026-03-26 06:17:29.183936 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.183950 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.183963 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.183977 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:17:29.183991 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:17:29.184004 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:17:29.184018 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:17:29.184033 | orchestrator | 2026-03-26 06:17:29.184046 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-03-26 06:17:29.184072 | orchestrator | Thursday 26 March 2026 06:17:28 +0000 (0:00:02.578) 1:14:51.920 ******** 2026-03-26 06:17:29.184086 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:17:29.184099 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:17:29.184114 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:17:29.184142 | orchestrator | skipping: [testbed-node-3] 2026-03-26 06:18:19.932081 | orchestrator | skipping: [testbed-node-4] 2026-03-26 06:18:19.932191 | orchestrator | skipping: [testbed-node-5] 2026-03-26 06:18:19.932204 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932215 | orchestrator | 2026-03-26 06:18:19.932225 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-03-26 06:18:19.932236 | orchestrator | 2026-03-26 06:18:19.932246 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-03-26 06:18:19.932256 | 
orchestrator | Thursday 26 March 2026 06:17:31 +0000 (0:00:02.982) 1:14:54.903 ******** 2026-03-26 06:18:19.932266 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-03-26 06:18:19.932277 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-03-26 06:18:19.932292 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-03-26 06:18:19.932309 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932325 | orchestrator | 2026-03-26 06:18:19.932343 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-26 06:18:19.932358 | orchestrator | Thursday 26 March 2026 06:17:32 +0000 (0:00:01.282) 1:14:56.185 ******** 2026-03-26 06:18:19.932374 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932390 | orchestrator | 2026-03-26 06:18:19.932406 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-26 06:18:19.932423 | orchestrator | Thursday 26 March 2026 06:17:33 +0000 (0:00:01.158) 1:14:57.344 ******** 2026-03-26 06:18:19.932534 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932554 | orchestrator | 2026-03-26 06:18:19.932569 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-26 06:18:19.932585 | orchestrator | Thursday 26 March 2026 06:17:34 +0000 (0:00:01.175) 1:14:58.520 ******** 2026-03-26 06:18:19.932600 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932615 | orchestrator | 2026-03-26 06:18:19.932631 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-26 06:18:19.932648 | orchestrator | Thursday 26 March 2026 06:17:35 +0000 (0:00:01.124) 1:14:59.645 ******** 2026-03-26 06:18:19.932668 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932685 | orchestrator | 2026-03-26 06:18:19.932702 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-03-26 06:18:19.932719 | orchestrator | Thursday 26 March 2026 06:17:37 +0000 (0:00:01.176) 1:15:00.821 ******** 2026-03-26 06:18:19.932735 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-03-26 06:18:19.932752 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-03-26 06:18:19.932771 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932788 | orchestrator | 2026-03-26 06:18:19.932805 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-03-26 06:18:19.932818 | orchestrator | Thursday 26 March 2026 06:17:38 +0000 (0:00:01.121) 1:15:01.943 ******** 2026-03-26 06:18:19.932830 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932840 | orchestrator | 2026-03-26 06:18:19.932851 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-03-26 06:18:19.932863 | orchestrator | Thursday 26 March 2026 06:17:39 +0000 (0:00:01.148) 1:15:03.091 ******** 2026-03-26 06:18:19.932873 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932884 | orchestrator | 2026-03-26 06:18:19.932896 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-03-26 06:18:19.932906 | orchestrator | Thursday 26 March 2026 06:17:40 +0000 (0:00:01.112) 1:15:04.204 ******** 2026-03-26 06:18:19.932917 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.932951 | orchestrator | 2026-03-26 06:18:19.932962 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-03-26 06:18:19.932973 | orchestrator | Thursday 26 March 2026 06:17:41 +0000 (0:00:01.343) 1:15:05.548 ******** 2026-03-26 06:18:19.932987 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-03-26 06:18:19.933004 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-03-26 06:18:19.933020 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.933036 | orchestrator | 2026-03-26 06:18:19.933053 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-03-26 06:18:19.933070 | orchestrator | Thursday 26 March 2026 06:17:43 +0000 (0:00:01.136) 1:15:06.684 ******** 2026-03-26 06:18:19.933086 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.933102 | orchestrator | 2026-03-26 06:18:19.933112 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-03-26 06:18:19.933121 | orchestrator | Thursday 26 March 2026 06:17:44 +0000 (0:00:01.103) 1:15:07.788 ******** 2026-03-26 06:18:19.933131 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.933140 | orchestrator | 2026-03-26 06:18:19.933150 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-03-26 06:18:19.933159 | orchestrator | Thursday 26 March 2026 06:17:45 +0000 (0:00:01.103) 1:15:08.891 ******** 2026-03-26 06:18:19.933169 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.933182 | orchestrator | 2026-03-26 06:18:19.933198 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-03-26 06:18:19.933215 | orchestrator | Thursday 26 March 2026 06:17:46 +0000 (0:00:01.116) 1:15:10.008 ******** 2026-03-26 06:18:19.933231 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:19.933247 | orchestrator | 2026-03-26 06:18:19.933262 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-03-26 06:18:19.933278 | orchestrator | 2026-03-26 06:18:19.933295 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-26 06:18:19.933311 | orchestrator | Thursday 26 March 2026 06:17:47 +0000 (0:00:01.641) 1:15:11.649 ******** 2026-03-26 
06:18:19.933327 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:18:19.933344 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:18:19.933360 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:18:19.933375 | orchestrator | 2026-03-26 06:18:19.933392 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-03-26 06:18:19.933407 | orchestrator | Thursday 26 March 2026 06:17:49 +0000 (0:00:01.793) 1:15:13.443 ******** 2026-03-26 06:18:19.933424 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:18:19.933460 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:18:19.933491 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:18:19.933501 | orchestrator | 2026-03-26 06:18:19.933511 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-03-26 06:18:19.933520 | orchestrator | Thursday 26 March 2026 06:17:51 +0000 (0:00:01.527) 1:15:14.970 ******** 2026-03-26 06:18:19.933529 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:18:19.933539 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:18:19.933548 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:18:19.933557 | orchestrator | 2026-03-26 06:18:19.933567 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-03-26 06:18:19.933576 | orchestrator | Thursday 26 March 2026 06:17:52 +0000 (0:00:01.430) 1:15:16.401 ******** 2026-03-26 06:18:19.933586 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:18:19.933595 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:18:19.933604 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:18:19.933613 | orchestrator | 2026-03-26 06:18:19.933623 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-03-26 06:18:19.933632 | orchestrator | Thursday 26 March 2026 06:17:54 +0000 (0:00:01.759) 1:15:18.160 ******** 2026-03-26 
06:18:19.933641 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:18:19.933661 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:18:19.933670 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:18:19.933680 | orchestrator | 2026-03-26 06:18:19.933689 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-03-26 06:18:19.933699 | orchestrator | Thursday 26 March 2026 06:17:55 +0000 (0:00:01.437) 1:15:19.597 ******** 2026-03-26 06:18:19.933708 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:18:19.933717 | orchestrator | skipping: [testbed-node-1] 2026-03-26 06:18:19.933727 | orchestrator | skipping: [testbed-node-2] 2026-03-26 06:18:19.933737 | orchestrator | 2026-03-26 06:18:19.933746 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-03-26 06:18:19.933756 | orchestrator | Thursday 26 March 2026 06:17:57 +0000 (0:00:01.397) 1:15:20.995 ******** 2026-03-26 06:18:19.933765 | orchestrator | skipping: [testbed-node-0] 2026-03-26 06:18:19.933775 | orchestrator | 2026-03-26 06:18:19.933784 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-03-26 06:18:19.933793 | orchestrator | 2026-03-26 06:18:19.933803 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-26 06:18:19.933812 | orchestrator | Thursday 26 March 2026 06:17:59 +0000 (0:00:02.089) 1:15:23.085 ******** 2026-03-26 06:18:19.933822 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.933832 | orchestrator | 2026-03-26 06:18:19.933841 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-26 06:18:19.933851 | orchestrator | Thursday 26 March 2026 06:18:00 +0000 (0:00:01.526) 1:15:24.611 ******** 2026-03-26 06:18:19.933860 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.933870 | orchestrator | 2026-03-26 06:18:19.933879 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-03-26 06:18:19.933888 | orchestrator | Thursday 26 March 2026 06:18:02 +0000 (0:00:01.243) 1:15:25.854 ******** 2026-03-26 06:18:19.933897 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.933907 | orchestrator | 2026-03-26 06:18:19.933916 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-03-26 06:18:19.933925 | orchestrator | Thursday 26 March 2026 06:18:03 +0000 (0:00:01.114) 1:15:26.969 ******** 2026-03-26 06:18:19.933935 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.933944 | orchestrator | 2026-03-26 06:18:19.933953 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-03-26 06:18:19.933963 | orchestrator | Thursday 26 March 2026 06:18:06 +0000 (0:00:02.960) 1:15:29.929 ******** 2026-03-26 06:18:19.933972 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.933981 | orchestrator | 2026-03-26 06:18:19.933991 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-03-26 06:18:19.934000 | orchestrator | Thursday 26 March 2026 06:18:09 +0000 (0:00:03.295) 1:15:33.225 ******** 2026-03-26 06:18:19.934010 | orchestrator | changed: [testbed-node-0] 2026-03-26 06:18:19.934075 | orchestrator | 2026-03-26 06:18:19.934086 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-03-26 06:18:19.934095 | orchestrator | 2026-03-26 06:18:19.934105 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-03-26 06:18:19.934114 | orchestrator | Thursday 26 March 2026 06:18:11 +0000 (0:00:01.963) 1:15:35.189 ******** 2026-03-26 06:18:19.934124 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.934133 | orchestrator | ok: [testbed-node-2] 2026-03-26 06:18:19.934143 | orchestrator | ok: [testbed-node-1] 2026-03-26 
06:18:19.934152 | orchestrator | 2026-03-26 06:18:19.934161 | orchestrator | TASK [Show ceph status] ******************************************************** 2026-03-26 06:18:19.934171 | orchestrator | Thursday 26 March 2026 06:18:13 +0000 (0:00:02.269) 1:15:37.458 ******** 2026-03-26 06:18:19.934180 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.934190 | orchestrator | 2026-03-26 06:18:19.934199 | orchestrator | TASK [Show all daemons version] ************************************************ 2026-03-26 06:18:19.934209 | orchestrator | Thursday 26 March 2026 06:18:16 +0000 (0:00:02.338) 1:15:39.797 ******** 2026-03-26 06:18:19.934218 | orchestrator | ok: [testbed-node-0] 2026-03-26 06:18:19.934237 | orchestrator | 2026-03-26 06:18:19.934247 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 06:18:19.934257 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-26 06:18:19.934268 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0 2026-03-26 06:18:19.934280 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0 2026-03-26 06:18:19.934289 | orchestrator | testbed-node-1 : ok=191  changed=16  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0 2026-03-26 06:18:19.934307 | orchestrator | testbed-node-2 : ok=196  changed=15  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0 2026-03-26 06:18:21.103821 | orchestrator | testbed-node-3 : ok=317  changed=21  unreachable=0 failed=0 skipped=362  rescued=0 ignored=0 2026-03-26 06:18:21.103919 | orchestrator | testbed-node-4 : ok=307  changed=18  unreachable=0 failed=0 skipped=359  rescued=0 ignored=0 2026-03-26 06:18:21.103932 | orchestrator | testbed-node-5 : ok=303  changed=18  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0 2026-03-26 06:18:21.103943 | orchestrator | 2026-03-26 
06:18:21.103954 | orchestrator | 2026-03-26 06:18:21.103963 | orchestrator | 2026-03-26 06:18:21.103973 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 06:18:21.103984 | orchestrator | Thursday 26 March 2026 06:18:19 +0000 (0:00:03.761) 1:15:43.559 ******** 2026-03-26 06:18:21.103993 | orchestrator | =============================================================================== 2026-03-26 06:18:21.104003 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 74.54s 2026-03-26 06:18:21.104013 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 74.51s 2026-03-26 06:18:21.104022 | orchestrator | Gather and delegate facts ---------------------------------------------- 31.50s 2026-03-26 06:18:21.104032 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.45s 2026-03-26 06:18:21.104041 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.26s 2026-03-26 06:18:21.104051 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.56s 2026-03-26 06:18:21.104060 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 29.82s 2026-03-26 06:18:21.104069 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 28.66s 2026-03-26 06:18:21.104079 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 23.07s 2026-03-26 06:18:21.104088 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.01s 2026-03-26 06:18:21.104097 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.82s 2026-03-26 06:18:21.104107 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 17.87s 2026-03-26 06:18:21.104116 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.74s 2026-03-26 06:18:21.104125 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.47s 2026-03-26 06:18:21.104135 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.42s 2026-03-26 06:18:21.104144 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.82s 2026-03-26 06:18:21.104153 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.81s 2026-03-26 06:18:21.104163 | orchestrator | Stop ceph osd ---------------------------------------------------------- 12.80s 2026-03-26 06:18:21.104172 | orchestrator | Stop standby ceph mds -------------------------------------------------- 11.17s 2026-03-26 06:18:21.104204 | orchestrator | Stop ceph mon ---------------------------------------------------------- 10.37s 2026-03-26 06:18:21.477242 | orchestrator | + osism apply cephclient 2026-03-26 06:18:23.670312 | orchestrator | 2026-03-26 06:18:23 | INFO  | Task 21ea1124-23b4-48c3-93da-984335f8d483 (cephclient) was prepared for execution. 2026-03-26 06:18:23.670410 | orchestrator | 2026-03-26 06:18:23 | INFO  | It takes a moment until task 21ea1124-23b4-48c3-93da-984335f8d483 (cephclient) has been started and output is visible here. 
2026-03-26 06:18:44.348477 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-03-26 06:18:44.348584 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-03-26 06:18:44.348609 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-03-26 06:18:44.348619 | orchestrator | (): 'NoneType' object is not subscriptable 2026-03-26 06:18:44.348638 | orchestrator | 2026-03-26 06:18:44.348649 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-26 06:18:44.348658 | orchestrator | 2026-03-26 06:18:44.348668 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-26 06:18:44.348678 | orchestrator | Thursday 26 March 2026 06:18:31 +0000 (0:00:02.472) 0:00:02.472 ******** 2026-03-26 06:18:44.348688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-26 06:18:44.348699 | orchestrator | 2026-03-26 06:18:44.348708 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-26 06:18:44.348718 | orchestrator | Thursday 26 March 2026 06:18:32 +0000 (0:00:00.801) 0:00:03.273 ******** 2026-03-26 06:18:44.348727 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-26 06:18:44.348737 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-26 06:18:44.348747 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-26 06:18:44.348757 | orchestrator | 2026-03-26 06:18:44.348766 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-26 06:18:44.348776 | orchestrator | Thursday 26 March 2026 06:18:33 +0000 (0:00:01.721) 0:00:04.995 ******** 2026-03-26 06:18:44.348785 | orchestrator | ok: [testbed-manager] => 
(item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-26 06:18:44.348795 | orchestrator | 2026-03-26 06:18:44.348804 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-26 06:18:44.348814 | orchestrator | Thursday 26 March 2026 06:18:34 +0000 (0:00:01.120) 0:00:06.116 ******** 2026-03-26 06:18:44.348823 | orchestrator | ok: [testbed-manager] 2026-03-26 06:18:44.348833 | orchestrator | 2026-03-26 06:18:44.348843 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-26 06:18:44.348852 | orchestrator | Thursday 26 March 2026 06:18:35 +0000 (0:00:00.932) 0:00:07.049 ******** 2026-03-26 06:18:44.348862 | orchestrator | ok: [testbed-manager] 2026-03-26 06:18:44.348872 | orchestrator | 2026-03-26 06:18:44.348881 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-26 06:18:44.348891 | orchestrator | Thursday 26 March 2026 06:18:36 +0000 (0:00:00.951) 0:00:08.001 ******** 2026-03-26 06:18:44.348900 | orchestrator | ok: [testbed-manager] 2026-03-26 06:18:44.348910 | orchestrator | 2026-03-26 06:18:44.348919 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-26 06:18:44.348928 | orchestrator | Thursday 26 March 2026 06:18:38 +0000 (0:00:01.275) 0:00:09.276 ******** 2026-03-26 06:18:44.348938 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-26 06:18:44.348948 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-03-26 06:18:44.348957 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-26 06:18:44.348989 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-26 06:18:44.349001 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-26 06:18:44.349012 | orchestrator | 2026-03-26 06:18:44.349023 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] 
****************** 2026-03-26 06:18:44.349034 | orchestrator | Thursday 26 March 2026 06:18:42 +0000 (0:00:04.160) 0:00:13.437 ******** 2026-03-26 06:18:44.349044 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-26 06:18:44.349055 | orchestrator | 2026-03-26 06:18:44.349066 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-26 06:18:44.349076 | orchestrator | Thursday 26 March 2026 06:18:42 +0000 (0:00:00.508) 0:00:13.946 ******** 2026-03-26 06:18:44.349087 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:44.349098 | orchestrator | 2026-03-26 06:18:44.349109 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-26 06:18:44.349120 | orchestrator | Thursday 26 March 2026 06:18:42 +0000 (0:00:00.152) 0:00:14.098 ******** 2026-03-26 06:18:44.349130 | orchestrator | skipping: [testbed-manager] 2026-03-26 06:18:44.349141 | orchestrator | 2026-03-26 06:18:44.349152 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-26 06:18:44.349163 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-26 06:18:44.349174 | orchestrator | 2026-03-26 06:18:44.349185 | orchestrator | 2026-03-26 06:18:44.349195 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-26 06:18:44.349206 | orchestrator | Thursday 26 March 2026 06:18:44 +0000 (0:00:01.113) 0:00:15.211 ******** 2026-03-26 06:18:44.349218 | orchestrator | =============================================================================== 2026-03-26 06:18:44.349228 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.16s 2026-03-26 06:18:44.349239 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.72s 2026-03-26 06:18:44.349249 | orchestrator | 
osism.services.cephclient : Manage cephclient service ------------------- 1.28s 2026-03-26 06:18:44.349260 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.12s 2026-03-26 06:18:44.349271 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.11s 2026-03-26 06:18:44.349282 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2026-03-26 06:18:44.349309 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s 2026-03-26 06:18:44.349320 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.80s 2026-03-26 06:18:44.349331 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s 2026-03-26 06:18:44.349341 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.15s 2026-03-26 06:18:44.664415 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-03-26 06:18:44.664533 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh 2026-03-26 06:18:44.671607 | orchestrator | + set -e 2026-03-26 06:18:44.672623 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-26 06:18:44.672729 | orchestrator | ++ export INTERACTIVE=false 2026-03-26 06:18:44.672748 | orchestrator | ++ INTERACTIVE=false 2026-03-26 06:18:44.672759 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-26 06:18:44.672769 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-26 06:18:44.672780 | orchestrator | + source /opt/manager-vars.sh 2026-03-26 06:18:44.672791 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-26 06:18:44.672802 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-26 06:18:44.672812 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-26 06:18:44.672823 | orchestrator | ++ CEPH_VERSION=reef 2026-03-26 06:18:44.672840 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-26 
06:18:44.672858 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-26 06:18:44.672877 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-03-26 06:18:44.672897 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-03-26 06:18:44.672916 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-26 06:18:44.672935 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-26 06:18:44.672982 | orchestrator | ++ export ARA=false 2026-03-26 06:18:44.672995 | orchestrator | ++ ARA=false 2026-03-26 06:18:44.673005 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-26 06:18:44.673016 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-26 06:18:44.673027 | orchestrator | ++ export TEMPEST=false 2026-03-26 06:18:44.673037 | orchestrator | ++ TEMPEST=false 2026-03-26 06:18:44.673047 | orchestrator | ++ export IS_ZUUL=true 2026-03-26 06:18:44.673058 | orchestrator | ++ IS_ZUUL=true 2026-03-26 06:18:44.673068 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 06:18:44.673079 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.54 2026-03-26 06:18:44.673090 | orchestrator | ++ export EXTERNAL_API=false 2026-03-26 06:18:44.673101 | orchestrator | ++ EXTERNAL_API=false 2026-03-26 06:18:44.673111 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-26 06:18:44.673122 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-26 06:18:44.673132 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-26 06:18:44.673143 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-26 06:18:44.673153 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-26 06:18:44.673163 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-26 06:18:44.673174 | orchestrator | ++ export RABBITMQ3TO4=true 2026-03-26 06:18:44.673184 | orchestrator | ++ RABBITMQ3TO4=true 2026-03-26 06:18:44.673195 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-03-26 06:18:44.673213 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-03-26 06:18:44.677897 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-03-26 06:18:44.677923 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-03-26 06:18:44.677934 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-26 06:18:44.677945 | orchestrator | + osism migrate rabbitmq3to4 prepare 2026-03-26 06:19:06.443395 | orchestrator | 2026-03-26 06:19:06 | ERROR  | Unable to get ansible vault password 2026-03-26 06:19:06.443540 | orchestrator | 2026-03-26 06:19:06 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-26 06:19:06.443564 | orchestrator | 2026-03-26 06:19:06 | ERROR  | Dropping encrypted entries 2026-03-26 06:19:06.484983 | orchestrator | 2026-03-26 06:19:06 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-03-26 06:19:06.485188 | orchestrator | 2026-03-26 06:19:06 | INFO  | Kolla configuration check passed 2026-03-26 06:19:06.674806 | orchestrator | 2026-03-26 06:19:06 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-03-26 06:19:06.692729 | orchestrator | 2026-03-26 06:19:06 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-03-26 06:19:06.900075 | orchestrator | + osism migrate rabbitmq3to4 list 2026-03-26 06:19:25.861380 | orchestrator | 2026-03-26 06:19:25 | ERROR  | Unable to get ansible vault password 2026-03-26 06:19:25.861491 | orchestrator | 2026-03-26 06:19:25 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-03-26 06:19:25.861507 | orchestrator | 2026-03-26 06:19:25 | ERROR  | Dropping encrypted entries 2026-03-26 06:19:25.922855 | orchestrator | 2026-03-26 06:19:25 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-03-26 06:19:26.064522 | orchestrator | 2026-03-26 06:19:26 | INFO  | Found 208 classic queue(s) in vhost '/': 2026-03-26 06:19:26.064703 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-03-26 06:19:26.064735 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-03-26 06:19:26.064748 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-03-26 06:19:26.064771 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-03-26 06:19:26.064941 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - barbican.workers_fanout_b4e210291c1f4595a8110eea41f3b718 (vhost: /, messages: 0) 2026-03-26 06:19:26.065078 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - barbican.workers_fanout_e5817fbc18e24736ae19aea2dd46443c (vhost: /, messages: 0) 2026-03-26 06:19:26.065417 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - barbican.workers_fanout_fb5c5d01a8bc44969546e21abf2484cc (vhost: /, messages: 0) 2026-03-26 06:19:26.065546 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-03-26 06:19:26.066409 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central (vhost: /, messages: 0) 2026-03-26 06:19:26.066557 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.066574 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.066600 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.066617 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central_fanout_35733cd31394488d9e3db0d42e40e34c (vhost: /, messages: 0) 2026-03-26 06:19:26.066813 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central_fanout_3c6a3e144f2243b2bc8f0af87b8fdf99 (vhost: /, messages: 0) 2026-03-26 
06:19:26.067137 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central_fanout_3e97b3e6bc4c46eab56fa9a43109ebd7 (vhost: /, messages: 0) 2026-03-26 06:19:26.067244 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central_fanout_6bd3253d59d0488c8ecbab32dee32db2 (vhost: /, messages: 0) 2026-03-26 06:19:26.067544 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central_fanout_82cf3e0940194386804159a753c7460c (vhost: /, messages: 0) 2026-03-26 06:19:26.067565 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - central_fanout_d4f9292219c84761a99423dd67f1398c (vhost: /, messages: 0) 2026-03-26 06:19:26.067976 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-03-26 06:19:26.067997 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.068318 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.068575 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.068725 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-backup_fanout_8f5f559e775243a98c2c1f750404a4f1 (vhost: /, messages: 0) 2026-03-26 06:19:26.069156 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-backup_fanout_bf6b6bf9a62148dea1244fab3e97b501 (vhost: /, messages: 0) 2026-03-26 06:19:26.069177 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-backup_fanout_f2d658f5fc36455a8931e7580182289c (vhost: /, messages: 0) 2026-03-26 06:19:26.069188 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-03-26 06:19:26.069575 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.069597 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.069900 | orchestrator | 2026-03-26 
06:19:26 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.069919 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-scheduler_fanout_7eb0367bc74f44d2b4625e36bc5d642d (vhost: /, messages: 0) 2026-03-26 06:19:26.070266 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-scheduler_fanout_d047f442bc244458a150b5609e4af3ec (vhost: /, messages: 0) 2026-03-26 06:19:26.070289 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-scheduler_fanout_f872bbe8a39c496c82652ebaf402b14a (vhost: /, messages: 0) 2026-03-26 06:19:26.070606 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-03-26 06:19:26.070714 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-03-26 06:19:26.071021 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.071185 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_a6533171c6994adf9bcce0cd216bf57c (vhost: /, messages: 0) 2026-03-26 06:19:26.071324 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-03-26 06:19:26.071558 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.071841 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_b4bcfca53a99458ab09955313468afc6 (vhost: /, messages: 0) 2026-03-26 06:19:26.071859 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-03-26 06:19:26.072204 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.072224 | orchestrator | 2026-03-26 06:19:26 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_ade417b8c94e4319896a910815b50d48 (vhost: /, messages: 0) 2026-03-26 06:19:26.072363 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume_fanout_029b93b2eb2647bebe3b03dbcaf16cff (vhost: /, messages: 0) 2026-03-26 06:19:26.072724 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume_fanout_8c9dd967d35c4794ac8b90812623d27b (vhost: /, messages: 0) 2026-03-26 06:19:26.072747 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - cinder-volume_fanout_8fd2c7d0d57d479098d7dc9f2a8a7324 (vhost: /, messages: 0) 2026-03-26 06:19:26.072983 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - compute (vhost: /, messages: 0) 2026-03-26 06:19:26.073010 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-03-26 06:19:26.073183 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-03-26 06:19:26.073322 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-03-26 06:19:26.073555 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - compute_fanout_1633d9b6ca8d45e3980c3aa8b6d1479f (vhost: /, messages: 0) 2026-03-26 06:19:26.073645 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - compute_fanout_2ed9f7af58614a9fac21fe0ebf645431 (vhost: /, messages: 0) 2026-03-26 06:19:26.073905 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - compute_fanout_dce43fa4b04f4a428d9422d5117ae051 (vhost: /, messages: 0) 2026-03-26 06:19:26.073973 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor (vhost: /, messages: 0) 2026-03-26 06:19:26.074959 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.075028 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.075410 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-03-26 06:19:26.075451 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor_fanout_0a087e21fbc9400eaca4d0d8c28e18e1 (vhost: /, messages: 0) 2026-03-26 06:19:26.075614 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor_fanout_2779a7fe25ce4c3292b21b1ec783f5a8 (vhost: /, messages: 0) 2026-03-26 06:19:26.075689 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor_fanout_3807aa7d35c84957b24855e76e40df5c (vhost: /, messages: 0) 2026-03-26 06:19:26.076040 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor_fanout_39ddbc7364ef4e6093e49dc1f29da98e (vhost: /, messages: 0) 2026-03-26 06:19:26.076349 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor_fanout_3a1d92acabb3468e9eeb34fd1571f679 (vhost: /, messages: 0) 2026-03-26 06:19:26.076502 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - conductor_fanout_434589943aa54e9c8edd5825fd358e9e (vhost: /, messages: 0) 2026-03-26 06:19:26.076750 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - event.sample (vhost: /, messages: 9) 2026-03-26 06:19:26.076845 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-03-26 06:19:26.077054 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor.dxt76xirij65 (vhost: /, messages: 0) 2026-03-26 06:19:26.077183 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor.u74qoctmj2at (vhost: /, messages: 0) 2026-03-26 06:19:26.077465 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor.ylgl7fnvnb3q (vhost: /, messages: 0) 2026-03-26 06:19:26.077564 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_114f868d641941c393c267ebefb2d881 (vhost: /, messages: 0) 2026-03-26 06:19:26.077926 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_25275d6376434f9a88947423f3f0ddf9 (vhost: /, messages: 0) 2026-03-26 06:19:26.078755 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_4ada99082e6346f38692f45c23a45fb8 (vhost: /, 
messages: 0) 2026-03-26 06:19:26.078782 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_4b77c13e89a54dfdab9272dd2f0ef97f (vhost: /, messages: 0) 2026-03-26 06:19:26.078972 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_a7c5e48d18c443a1956b99c2edbcbc94 (vhost: /, messages: 0) 2026-03-26 06:19:26.079116 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_b3b098cb100849bab05884917ef2e4e2 (vhost: /, messages: 0) 2026-03-26 06:19:26.079370 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_e4ecb03c594f47fe9337c88ff65131bd (vhost: /, messages: 0) 2026-03-26 06:19:26.079384 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_ee984a30032243dfb0cc80ab0c2b8384 (vhost: /, messages: 0) 2026-03-26 06:19:26.079672 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - magnum-conductor_fanout_f4be591b6d034162b990cb9296faba3f (vhost: /, messages: 0) 2026-03-26 06:19:26.079908 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-03-26 06:19:26.080231 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.080245 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.080467 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.080482 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-data_fanout_c0ecdb1bd3924f28b9e1bbc7eccc646b (vhost: /, messages: 0) 2026-03-26 06:19:26.080609 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-data_fanout_f03d3befe0314d879d7f796de934dd9d (vhost: /, messages: 0) 2026-03-26 06:19:26.080906 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-data_fanout_f85984553bde4d31bf278d156c7ac211 (vhost: /, messages: 0) 2026-03-26 06:19:26.080932 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-03-26 06:19:26.081144 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.081306 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.081319 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.081679 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-scheduler_fanout_a136d7d13b474010a342264c2c907e16 (vhost: /, messages: 0) 2026-03-26 06:19:26.081849 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-scheduler_fanout_e2eb07a671e64e1686089b8a01af098e (vhost: /, messages: 0) 2026-03-26 06:19:26.082175 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-scheduler_fanout_f7be0f8d31c34c4da429bfe2aca47847 (vhost: /, messages: 0) 2026-03-26 06:19:26.082203 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-03-26 06:19:26.082307 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-03-26 06:19:26.082573 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-03-26 06:19:26.082587 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-03-26 06:19:26.082816 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-share_fanout_7805b1c8e1414693b0ba1f609b02809a (vhost: /, messages: 0) 2026-03-26 06:19:26.082990 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-share_fanout_bc63684f7fde4834a4efc8c16b403a68 (vhost: /, messages: 0) 2026-03-26 06:19:26.083246 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - manila-share_fanout_fc5e9c2e25184fcfb3dfda7a4dc4dc33 (vhost: /, messages: 0) 2026-03-26 06:19:26.083347 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - 
notifications.audit (vhost: /, messages: 0) 2026-03-26 06:19:26.083359 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-03-26 06:19:26.083573 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-03-26 06:19:26.083775 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-03-26 06:19:26.083798 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-03-26 06:19:26.083968 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-03-26 06:19:26.084096 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-03-26 06:19:26.084151 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-03-26 06:19:26.084380 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.084564 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.084690 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.084801 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - octavia_provisioning_v2_fanout_0fac4d064de84e5bbd6d466dd834e1f4 (vhost: /, messages: 0) 2026-03-26 06:19:26.085092 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - octavia_provisioning_v2_fanout_86ab57fd165d4bd783e3c86dd75f8847 (vhost: /, messages: 0) 2026-03-26 06:19:26.085117 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - octavia_provisioning_v2_fanout_c60f129848694c27b399ca7ddbf06f06 (vhost: /, messages: 0) 2026-03-26 06:19:26.085225 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer (vhost: /, messages: 0) 2026-03-26 06:19:26.085328 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - 
producer.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.085542 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.085654 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.086097 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer_fanout_1598d067d87e4d04b00c428b4b47727f (vhost: /, messages: 0) 2026-03-26 06:19:26.086408 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer_fanout_36061f07216b4217adb49fe71c35eaf8 (vhost: /, messages: 0) 2026-03-26 06:19:26.086423 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer_fanout_43e908b0d0d945869d271173a5152aba (vhost: /, messages: 0) 2026-03-26 06:19:26.086447 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer_fanout_ae6a2357e6a34a50b6620a7e1a181a77 (vhost: /, messages: 0) 2026-03-26 06:19:26.086454 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer_fanout_cbd89788cf2f40d2a554ea5281955327 (vhost: /, messages: 0) 2026-03-26 06:19:26.086763 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - producer_fanout_e0f7d2d7fb274163a86be0a37dfd2cd8 (vhost: /, messages: 0) 2026-03-26 06:19:26.086777 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-03-26 06:19:26.086784 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.086791 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.086985 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.087006 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_44a4c0e6ec6844979a126b54345b60f3 (vhost: /, messages: 0) 2026-03-26 06:19:26.087163 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_63aafcca795041abb90dcd32d46b2b55 (vhost: /, messages: 0) 2026-03-26 
06:19:26.087175 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_80382bcbeca9481f812885405d457324 (vhost: /, messages: 0) 2026-03-26 06:19:26.087237 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_a296e3d4085d407584dbef3c971a0f2a (vhost: /, messages: 0) 2026-03-26 06:19:26.087350 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_b4cf746c93324effb7ff69cab9c2350a (vhost: /, messages: 0) 2026-03-26 06:19:26.087459 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_c3871e367a2d4d3baf01c5aea2c1919a (vhost: /, messages: 0) 2026-03-26 06:19:26.087547 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_f41bb944c69d473ab16fa6b5d6d35e10 (vhost: /, messages: 0) 2026-03-26 06:19:26.087790 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_f7d147aad30f4bcb9adb25eeb33c86c0 (vhost: /, messages: 0) 2026-03-26 06:19:26.087811 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-plugin_fanout_f9b48b37b74d417d94f841025641bd73 (vhost: /, messages: 0) 2026-03-26 06:19:26.088033 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-03-26 06:19:26.088045 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.088052 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.088162 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.088231 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_1f4456cdaa8043e3be2805ad39d5ab25 (vhost: /, messages: 0) 2026-03-26 06:19:26.088382 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_26775edc33404dbc9abe1e144bd80525 (vhost: /, messages: 0) 2026-03-26 06:19:26.088482 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - 
q-reports-plugin_fanout_31a15587650e47c3bde5abc053e22c01 (vhost: /, messages: 0) 2026-03-26 06:19:26.088659 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_32ec2ee87c0245788dffd816ea3c023c (vhost: /, messages: 0) 2026-03-26 06:19:26.088766 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_4a6c18d1e8814a9dae92e88643a4f6a3 (vhost: /, messages: 0) 2026-03-26 06:19:26.088847 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_538ba911017345b08e67db1a5c5b55b5 (vhost: /, messages: 0) 2026-03-26 06:19:26.089086 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_5897b7a973ad4acc91343aa05c7af3ff (vhost: /, messages: 0) 2026-03-26 06:19:26.089168 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_6319d5e241c541e58310e16e9dc52144 (vhost: /, messages: 0) 2026-03-26 06:19:26.089308 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_6b360b1659a14d0da13ea3b346e563aa (vhost: /, messages: 0) 2026-03-26 06:19:26.089442 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_6f16a59b15654f4dbcf85abee6644a2b (vhost: /, messages: 0) 2026-03-26 06:19:26.089604 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_94ee497291ae489a9c6b238ca80e38fd (vhost: /, messages: 0) 2026-03-26 06:19:26.089618 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_9e52c6dc9ed24a7b80ea13a94674ad74 (vhost: /, messages: 0) 2026-03-26 06:19:26.089625 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_aa918d06050f4bd7a2ee224ba419ce31 (vhost: /, messages: 0) 2026-03-26 06:19:26.089766 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_c1cdb63a10e04fc6bf41640f72fa8159 (vhost: /, messages: 0) 2026-03-26 06:19:26.089871 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_cfb3d53116bd463396bf620650870a65 (vhost: /, messages: 0) 2026-03-26 
06:19:26.090323 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_d09b359918574dcab62f2ee6b5dcdca8 (vhost: /, messages: 0) 2026-03-26 06:19:26.090336 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_dad219db8f49450d9a2b986223f703bf (vhost: /, messages: 0) 2026-03-26 06:19:26.090557 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-reports-plugin_fanout_dd786927624c419ead6015dcfc58770f (vhost: /, messages: 0) 2026-03-26 06:19:26.090570 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-03-26 06:19:26.090578 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.090585 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.090931 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.090943 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_1cb9224fc37a4a48bfacc80953dd403f (vhost: /, messages: 0) 2026-03-26 06:19:26.090960 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_3135493c2b264f71bb9bbad34fb00452 (vhost: /, messages: 0) 2026-03-26 06:19:26.091172 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_3d6dfb328ccc46a6a9cb2f4b96e3d697 (vhost: /, messages: 0) 2026-03-26 06:19:26.091184 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_408591ee4ea44bc4bacb6ad94f1cf98e (vhost: /, messages: 0) 2026-03-26 06:19:26.091191 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_4500faec64a44eedbcb004f0e67b9b20 (vhost: /, messages: 0) 2026-03-26 06:19:26.091345 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - 
q-server-resource-versions_fanout_64da208a82c3494385ded2e418c9a122 (vhost: /, messages: 0) 2026-03-26 06:19:26.091887 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_651674cdba1c43a9813d7ecb595a9cfb (vhost: /, messages: 0) 2026-03-26 06:19:26.091900 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_c6e201d63611428cad0b60e848404955 (vhost: /, messages: 0) 2026-03-26 06:19:26.092181 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - q-server-resource-versions_fanout_fa758238da8947f4b71f8bcdaabb883b (vhost: /, messages: 0) 2026-03-26 06:19:26.092192 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_0143f57daf2b426b89201cef986f95e2 (vhost: /, messages: 0) 2026-03-26 06:19:26.092199 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_1607a4c70ae045398f71fc2d451c5a27 (vhost: /, messages: 0) 2026-03-26 06:19:26.092452 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_2d05cc2aedda45ea9c31797c9488552e (vhost: /, messages: 0) 2026-03-26 06:19:26.092465 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_2df21c6aa5e242c2a4bd19ebf039c414 (vhost: /, messages: 0) 2026-03-26 06:19:26.092472 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_42c674747649424c96f46572b03ea0da (vhost: /, messages: 0) 2026-03-26 06:19:26.092659 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_4ef0c8bf75cf449ca8256621ad7454ac (vhost: /, messages: 0) 2026-03-26 06:19:26.092671 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_550a4fdad56e49009b36c1374a865c76 (vhost: /, messages: 0) 2026-03-26 06:19:26.092758 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_919bd19646db497b8240f22dec5e8b0c (vhost: /, messages: 0) 2026-03-26 06:19:26.092845 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_91fe99c6abc64314ada5daa43f18f50b (vhost: /, messages: 0) 2026-03-26 06:19:26.092856 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_9db8e0e8fdb44324a47d8bfc1ac1f280 (vhost: /, messages: 0) 
2026-03-26 06:19:26.093062 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_a0bf128c59ef4dcf98d30d7684f34f2e (vhost: /, messages: 0) 2026-03-26 06:19:26.093176 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_a1d93d6f81c749c9b16200e2c0c20728 (vhost: /, messages: 0) 2026-03-26 06:19:26.093188 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_a363e6378e634bcaaa24df418226ca2b (vhost: /, messages: 0) 2026-03-26 06:19:26.093195 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_d2aaa10c347d4f878af4a93b4462aca8 (vhost: /, messages: 0) 2026-03-26 06:19:26.093456 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_dcf1dff11157461c91e0f07e1149d443 (vhost: /, messages: 0) 2026-03-26 06:19:26.093526 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_e5a2bc26f56f421d9a8b1028dfa744bf (vhost: /, messages: 0) 2026-03-26 06:19:26.093881 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_e6549370496a4080a6647b33c1437c75 (vhost: /, messages: 1) 2026-03-26 06:19:26.093902 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_ece0d2b5146749299699600bd6620114 (vhost: /, messages: 0) 2026-03-26 06:19:26.093909 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - reply_ee22625ab8d44350bdd04b376e72d57f (vhost: /, messages: 0) 2026-03-26 06:19:26.094300 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-03-26 06:19:26.094319 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.094334 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.094342 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.094501 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler_fanout_17c056e993ef47e3acc112af16d457e3 (vhost: /, messages: 0) 2026-03-26 06:19:26.094513 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - 
scheduler_fanout_495244cec2c64f3c9bdf3534c4821ed3 (vhost: /, messages: 0) 2026-03-26 06:19:26.094591 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler_fanout_4f6c87c162d1459ea749bf93d9575641 (vhost: /, messages: 0) 2026-03-26 06:19:26.094602 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler_fanout_5825e34b1d114c22a0ea70626d5b3c06 (vhost: /, messages: 0) 2026-03-26 06:19:26.094758 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler_fanout_9dd488cc166144d3b66b6167500e5b69 (vhost: /, messages: 0) 2026-03-26 06:19:26.094771 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - scheduler_fanout_d8d2fe93979a46f4ad96c7e79aad00d7 (vhost: /, messages: 0) 2026-03-26 06:19:26.094987 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker (vhost: /, messages: 0) 2026-03-26 06:19:26.094998 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0) 2026-03-26 06:19:26.095136 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0) 2026-03-26 06:19:26.095147 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0) 2026-03-26 06:19:26.095258 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker_fanout_360ce71e2a2b43c898e071ad25434240 (vhost: /, messages: 0) 2026-03-26 06:19:26.095268 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker_fanout_4b3c4dcb71b947008464d6f8006593a6 (vhost: /, messages: 0) 2026-03-26 06:19:26.095506 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker_fanout_4f0de0701f684613b3655e8683bea71b (vhost: /, messages: 0) 2026-03-26 06:19:26.095518 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker_fanout_5a8bae1580fd4d94b676dd0da9c1331b (vhost: /, messages: 0) 2026-03-26 06:19:26.095525 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - worker_fanout_c08facaf21194343b22a11cadc24a44e (vhost: /, messages: 0) 2026-03-26 06:19:26.095532 | orchestrator | 2026-03-26 06:19:26 | INFO  |  - 
worker_fanout_cc85c421d64b4d6b9fbd61b01ede5463 (vhost: /, messages: 0)
2026-03-26 06:19:26.329563 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-03-26 06:19:28.071644 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run]
2026-03-26 06:19:28.071734 | orchestrator |                                   [--no-close-connections] [--quorum]
2026-03-26 06:19:28.071750 | orchestrator |                                   [--vhost VHOST]
2026-03-26 06:19:28.071762 | orchestrator |                                   [{list,delete,prepare,check}]
2026-03-26 06:19:28.071774 | orchestrator |                                   [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}]
2026-03-26 06:19:28.071786 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check)
2026-03-26 06:19:28.681588 | orchestrator | ERROR
2026-03-26 06:19:28.681795 | orchestrator | {
2026-03-26 06:19:28.681832 | orchestrator |   "delta": "2:04:28.073244",
2026-03-26 06:19:28.681855 | orchestrator |   "end": "2026-03-26 06:19:28.265674",
2026-03-26 06:19:28.681877 | orchestrator |   "msg": "non-zero return code",
2026-03-26 06:19:28.681896 | orchestrator |   "rc": 2,
2026-03-26 06:19:28.681916 | orchestrator |   "start": "2026-03-26 04:15:00.192430"
2026-03-26 06:19:28.681935 | orchestrator | } failure
2026-03-26 06:19:28.965863 |
2026-03-26 06:19:28.966163 | PLAY RECAP
2026-03-26 06:19:28.966313 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-03-26 06:19:28.966385 |
2026-03-26 06:19:29.244090 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-03-26 06:19:29.246749 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-03-26 06:19:29.983445 |
2026-03-26 06:19:29.983613 | PLAY [Post output play]
2026-03-26 06:19:30.001344 |
2026-03-26 06:19:30.001490 | LOOP [stage-output : Register sources]
2026-03-26 06:19:30.073187 |
2026-03-26
06:19:30.073541 | TASK [stage-output : Check sudo] 2026-03-26 06:19:30.960099 | orchestrator | sudo: a password is required 2026-03-26 06:19:31.119865 | orchestrator | ok: Runtime: 0:00:00.016472 2026-03-26 06:19:31.136522 | 2026-03-26 06:19:31.136678 | LOOP [stage-output : Set source and destination for files and folders] 2026-03-26 06:19:31.175655 | 2026-03-26 06:19:31.176703 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-03-26 06:19:31.249418 | orchestrator | ok 2026-03-26 06:19:31.258734 | 2026-03-26 06:19:31.258895 | LOOP [stage-output : Ensure target folders exist] 2026-03-26 06:19:31.741167 | orchestrator | ok: "docs" 2026-03-26 06:19:31.741494 | 2026-03-26 06:19:31.972381 | orchestrator | ok: "artifacts" 2026-03-26 06:19:32.225739 | orchestrator | ok: "logs" 2026-03-26 06:19:32.250506 | 2026-03-26 06:19:32.250693 | LOOP [stage-output : Copy files and folders to staging folder] 2026-03-26 06:19:32.290160 | 2026-03-26 06:19:32.290436 | TASK [stage-output : Make all log files readable] 2026-03-26 06:19:32.571905 | orchestrator | ok 2026-03-26 06:19:32.581711 | 2026-03-26 06:19:32.581858 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-03-26 06:19:32.616701 | orchestrator | skipping: Conditional result was False 2026-03-26 06:19:32.633408 | 2026-03-26 06:19:32.633567 | TASK [stage-output : Discover log files for compression] 2026-03-26 06:19:32.657903 | orchestrator | skipping: Conditional result was False 2026-03-26 06:19:32.670496 | 2026-03-26 06:19:32.670640 | LOOP [stage-output : Archive everything from logs] 2026-03-26 06:19:32.712411 | 2026-03-26 06:19:32.712581 | PLAY [Post cleanup play] 2026-03-26 06:19:32.721128 | 2026-03-26 06:19:32.721232 | TASK [Set cloud fact (Zuul deployment)] 2026-03-26 06:19:32.777315 | orchestrator | ok 2026-03-26 06:19:32.789563 | 2026-03-26 06:19:32.789696 | TASK [Set cloud fact (local deployment)] 2026-03-26 06:19:32.823892 | orchestrator | skipping: Conditional result was 
False 2026-03-26 06:19:32.839447 | 2026-03-26 06:19:32.839582 | TASK [Clean the cloud environment] 2026-03-26 06:19:33.437450 | orchestrator | 2026-03-26 06:19:33 - clean up servers 2026-03-26 06:19:34.200121 | orchestrator | 2026-03-26 06:19:34 - testbed-manager 2026-03-26 06:19:34.283090 | orchestrator | 2026-03-26 06:19:34 - testbed-node-4 2026-03-26 06:19:34.373580 | orchestrator | 2026-03-26 06:19:34 - testbed-node-5 2026-03-26 06:19:34.460335 | orchestrator | 2026-03-26 06:19:34 - testbed-node-0 2026-03-26 06:19:34.554949 | orchestrator | 2026-03-26 06:19:34 - testbed-node-3 2026-03-26 06:19:34.649186 | orchestrator | 2026-03-26 06:19:34 - testbed-node-1 2026-03-26 06:19:34.747097 | orchestrator | 2026-03-26 06:19:34 - testbed-node-2 2026-03-26 06:19:34.835270 | orchestrator | 2026-03-26 06:19:34 - clean up keypairs 2026-03-26 06:19:34.853981 | orchestrator | 2026-03-26 06:19:34 - testbed 2026-03-26 06:19:34.880593 | orchestrator | 2026-03-26 06:19:34 - wait for servers to be gone 2026-03-26 06:19:45.788468 | orchestrator | 2026-03-26 06:19:45 - clean up ports 2026-03-26 06:19:45.995039 | orchestrator | 2026-03-26 06:19:45 - 0d011944-d664-48dc-afd5-e876547c06cd 2026-03-26 06:19:46.307701 | orchestrator | 2026-03-26 06:19:46 - 4acb0156-c5c7-4457-a252-e47665f1efbd 2026-03-26 06:19:46.589312 | orchestrator | 2026-03-26 06:19:46 - 5d571e2f-b701-4263-883f-e7221cc0c90a 2026-03-26 06:19:46.810588 | orchestrator | 2026-03-26 06:19:46 - be5d018b-5284-4a13-84e4-e85247bdd4a2 2026-03-26 06:19:47.210862 | orchestrator | 2026-03-26 06:19:47 - c09a035e-b37c-4623-b251-1a605583e2e5 2026-03-26 06:19:47.440459 | orchestrator | 2026-03-26 06:19:47 - d0917be5-d774-4ae8-aa8e-88d5b51d1da3 2026-03-26 06:19:47.651879 | orchestrator | 2026-03-26 06:19:47 - da761b17-04fe-4927-b0ae-964e0d48a3d8 2026-03-26 06:19:47.860820 | orchestrator | 2026-03-26 06:19:47 - clean up volumes 2026-03-26 06:19:48.002124 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-3-node-base 2026-03-26 
06:19:48.041287 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-2-node-base 2026-03-26 06:19:48.090934 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-5-node-base 2026-03-26 06:19:48.134391 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-4-node-base 2026-03-26 06:19:48.177557 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-manager-base 2026-03-26 06:19:48.221366 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-1-node-base 2026-03-26 06:19:48.261329 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-0-node-base 2026-03-26 06:19:48.303447 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-0-node-3 2026-03-26 06:19:48.348314 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-4-node-4 2026-03-26 06:19:48.388489 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-5-node-5 2026-03-26 06:19:48.433826 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-8-node-5 2026-03-26 06:19:48.477271 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-3-node-3 2026-03-26 06:19:48.518335 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-1-node-4 2026-03-26 06:19:48.566821 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-7-node-4 2026-03-26 06:19:48.610692 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-2-node-5 2026-03-26 06:19:48.653521 | orchestrator | 2026-03-26 06:19:48 - testbed-volume-6-node-3 2026-03-26 06:19:48.695296 | orchestrator | 2026-03-26 06:19:48 - disconnect routers 2026-03-26 06:19:48.763281 | orchestrator | 2026-03-26 06:19:48 - testbed 2026-03-26 06:19:50.483757 | orchestrator | 2026-03-26 06:19:50 - clean up subnets 2026-03-26 06:19:50.553346 | orchestrator | 2026-03-26 06:19:50 - subnet-testbed-management 2026-03-26 06:19:50.737835 | orchestrator | 2026-03-26 06:19:50 - clean up networks 2026-03-26 06:19:50.903189 | orchestrator | 2026-03-26 06:19:50 - net-testbed-management 2026-03-26 06:19:51.205292 | orchestrator | 2026-03-26 06:19:51 - clean up security groups 2026-03-26 06:19:51.250012 | orchestrator | 
2026-03-26 06:19:51 - testbed-management 2026-03-26 06:19:51.359447 | orchestrator | 2026-03-26 06:19:51 - testbed-node 2026-03-26 06:19:51.471693 | orchestrator | 2026-03-26 06:19:51 - clean up floating ips 2026-03-26 06:19:51.515956 | orchestrator | 2026-03-26 06:19:51 - 81.163.192.54 2026-03-26 06:19:52.266257 | orchestrator | 2026-03-26 06:19:52 - clean up routers 2026-03-26 06:19:52.390552 | orchestrator | 2026-03-26 06:19:52 - testbed 2026-03-26 06:19:53.401489 | orchestrator | ok: Runtime: 0:00:20.185435 2026-03-26 06:19:53.406794 | 2026-03-26 06:19:53.407036 | PLAY RECAP 2026-03-26 06:19:53.407168 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-03-26 06:19:53.407231 | 2026-03-26 06:19:53.547231 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-03-26 06:19:53.549692 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-26 06:19:54.282313 | 2026-03-26 06:19:54.282478 | PLAY [Cleanup play] 2026-03-26 06:19:54.298324 | 2026-03-26 06:19:54.298461 | TASK [Set cloud fact (Zuul deployment)] 2026-03-26 06:19:54.366867 | orchestrator | ok 2026-03-26 06:19:54.375953 | 2026-03-26 06:19:54.376141 | TASK [Set cloud fact (local deployment)] 2026-03-26 06:19:54.410354 | orchestrator | skipping: Conditional result was False 2026-03-26 06:19:54.430554 | 2026-03-26 06:19:54.430781 | TASK [Clean the cloud environment] 2026-03-26 06:19:55.561120 | orchestrator | 2026-03-26 06:19:55 - clean up servers 2026-03-26 06:19:56.024873 | orchestrator | 2026-03-26 06:19:56 - clean up keypairs 2026-03-26 06:19:56.044622 | orchestrator | 2026-03-26 06:19:56 - wait for servers to be gone 2026-03-26 06:19:56.102399 | orchestrator | 2026-03-26 06:19:56 - clean up ports 2026-03-26 06:19:56.181626 | orchestrator | 2026-03-26 06:19:56 - clean up volumes 2026-03-26 06:19:56.241997 | orchestrator | 2026-03-26 06:19:56 - disconnect routers 2026-03-26 06:19:56.273659 
| orchestrator | 2026-03-26 06:19:56 - clean up subnets 2026-03-26 06:19:56.300038 | orchestrator | 2026-03-26 06:19:56 - clean up networks 2026-03-26 06:19:56.469942 | orchestrator | 2026-03-26 06:19:56 - clean up security groups 2026-03-26 06:19:56.506284 | orchestrator | 2026-03-26 06:19:56 - clean up floating ips 2026-03-26 06:19:56.530478 | orchestrator | 2026-03-26 06:19:56 - clean up routers 2026-03-26 06:19:56.981532 | orchestrator | ok: Runtime: 0:00:01.349109 2026-03-26 06:19:56.985359 | 2026-03-26 06:19:56.985523 | PLAY RECAP 2026-03-26 06:19:56.985655 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-03-26 06:19:56.985721 | 2026-03-26 06:19:57.108220 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-03-26 06:19:57.110606 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-26 06:19:57.845131 | 2026-03-26 06:19:57.845297 | PLAY [Base post-fetch] 2026-03-26 06:19:57.861023 | 2026-03-26 06:19:57.861159 | TASK [fetch-output : Set log path for multiple nodes] 2026-03-26 06:19:57.917030 | orchestrator | skipping: Conditional result was False 2026-03-26 06:19:57.932053 | 2026-03-26 06:19:57.932258 | TASK [fetch-output : Set log path for single node] 2026-03-26 06:19:57.978146 | orchestrator | ok 2026-03-26 06:19:57.984271 | 2026-03-26 06:19:57.984385 | LOOP [fetch-output : Ensure local output dirs] 2026-03-26 06:19:58.493826 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/work/logs" 2026-03-26 06:19:58.772542 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/work/artifacts" 2026-03-26 06:19:59.070046 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6d507829b6994532b2cddf15505f7f09/work/docs" 2026-03-26 06:19:59.097116 | 2026-03-26 06:19:59.097296 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-26 
06:20:00.031466 | orchestrator | changed: .d..t...... ./ 2026-03-26 06:20:00.031724 | orchestrator | changed: All items complete 2026-03-26 06:20:00.031761 | 2026-03-26 06:20:00.790526 | orchestrator | changed: .d..t...... ./ 2026-03-26 06:20:01.507190 | orchestrator | changed: .d..t...... ./ 2026-03-26 06:20:01.536479 | 2026-03-26 06:20:01.536668 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-26 06:20:01.573677 | orchestrator | skipping: Conditional result was False 2026-03-26 06:20:01.577294 | orchestrator | skipping: Conditional result was False 2026-03-26 06:20:01.596884 | 2026-03-26 06:20:01.597040 | PLAY RECAP 2026-03-26 06:20:01.597134 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-26 06:20:01.597174 | 2026-03-26 06:20:01.738265 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-26 06:20:01.741040 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-26 06:20:02.474559 | 2026-03-26 06:20:02.474727 | PLAY [Base post] 2026-03-26 06:20:02.489321 | 2026-03-26 06:20:02.489461 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-26 06:20:03.466574 | orchestrator | changed 2026-03-26 06:20:03.476248 | 2026-03-26 06:20:03.476379 | PLAY RECAP 2026-03-26 06:20:03.476451 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-26 06:20:03.476522 | 2026-03-26 06:20:03.600210 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-26 06:20:03.602754 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-26 06:20:04.395666 | 2026-03-26 06:20:04.395839 | PLAY [Base post-logs] 2026-03-26 06:20:04.406532 | 2026-03-26 06:20:04.406668 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-26 06:20:04.870796 | 
localhost | changed 2026-03-26 06:20:04.885168 | 2026-03-26 06:20:04.885334 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-26 06:20:04.911805 | localhost | ok 2026-03-26 06:20:04.916009 | 2026-03-26 06:20:04.916131 | TASK [Set zuul-log-path fact] 2026-03-26 06:20:04.931658 | localhost | ok 2026-03-26 06:20:04.942358 | 2026-03-26 06:20:04.942471 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-26 06:20:04.969393 | localhost | ok 2026-03-26 06:20:04.975622 | 2026-03-26 06:20:04.975780 | TASK [upload-logs : Create log directories] 2026-03-26 06:20:05.496145 | localhost | changed 2026-03-26 06:20:05.501050 | 2026-03-26 06:20:05.501218 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-26 06:20:06.014264 | localhost -> localhost | ok: Runtime: 0:00:00.007098 2026-03-26 06:20:06.024055 | 2026-03-26 06:20:06.024245 | TASK [upload-logs : Upload logs to log server] 2026-03-26 06:20:06.590600 | localhost | Output suppressed because no_log was given 2026-03-26 06:20:06.593495 | 2026-03-26 06:20:06.593659 | LOOP [upload-logs : Compress console log and json output] 2026-03-26 06:20:06.665607 | localhost | skipping: Conditional result was False 2026-03-26 06:20:06.673649 | localhost | skipping: Conditional result was False 2026-03-26 06:20:06.684742 | 2026-03-26 06:20:06.684904 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-26 06:20:06.735355 | localhost | skipping: Conditional result was False 2026-03-26 06:20:06.735937 | 2026-03-26 06:20:06.739655 | localhost | skipping: Conditional result was False 2026-03-26 06:20:06.753700 | 2026-03-26 06:20:06.753875 | LOOP [upload-logs : Upload console log and json output]
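The root-cause failure earlier in this log is an argument-validation error: the upgrade playbook invoked `osism migrate rabbitmq3to4 list-exchanges`, but the CLI's positional `command` argument only accepts `list`, `delete`, `prepare`, or `check`, so argparse rejected it and exited with rc 2 (the queue listing that precedes it had already completed). A minimal sketch reproducing that validation behavior — the parser below is reconstructed from the usage text printed in this log, not taken from the actual osism source:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical parser mirroring the usage text logged above."""
    parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
    parser.add_argument("--server")
    parser.add_argument("--dry-run", action="store_true")
    parser.add_argument("--no-close-connections", action="store_true")
    parser.add_argument("--quorum", action="store_true")
    parser.add_argument("--vhost")
    # Both positionals are shown as optional in the usage line, hence nargs="?".
    parser.add_argument("command", nargs="?",
                        choices=["list", "delete", "prepare", "check"])
    parser.add_argument("service", nargs="?",
                        choices=["aodh", "barbican", "ceilometer", "cinder",
                                 "designate", "notifications", "manager",
                                 "magnum", "manila", "neutron", "nova",
                                 "octavia"])
    return parser


if __name__ == "__main__":
    parser = build_parser()
    # A valid invocation parses cleanly.
    args = parser.parse_args(["list", "neutron"])
    print(args.command, args.service)
    # The invocation from the log trips the choices check and exits with rc 2.
    try:
        parser.parse_args(["list-exchanges"])
    except SystemExit as exc:
        print("rc:", exc.code)
```

Under this reading, `osism migrate rabbitmq3to4 list` (a subcommand the usage text does list) would have parsed, so the fix belongs in the job's upgrade script rather than in RabbitMQ itself.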